00:00:00.001 Started by upstream project "autotest-per-patch" build number 127183 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.168 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.233 Using shallow fetch with depth 1 00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.233 > git --version # timeout=10 00:00:00.284 > git --version # 'git version 2.39.2' 00:00:00.284 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.322 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.322 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.183 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.195 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.209 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD) 00:00:06.209 > git config core.sparsecheckout # timeout=10 00:00:06.220 > git read-tree -mu HEAD # timeout=10 00:00:06.237 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5 00:00:06.255 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration" 00:00:06.255 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:06.364 [Pipeline] Start of Pipeline 00:00:06.379 [Pipeline] library 00:00:06.381 Loading library shm_lib@master 00:00:06.381 Library shm_lib@master is cached. Copying from home. 00:00:06.396 [Pipeline] node 00:00:21.399 Still waiting to schedule task 00:00:21.399 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:24.932 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:24.935 [Pipeline] { 00:03:24.948 [Pipeline] catchError 00:03:24.950 [Pipeline] { 00:03:24.965 [Pipeline] wrap 00:03:24.974 [Pipeline] { 00:03:24.984 [Pipeline] stage 00:03:24.986 [Pipeline] { (Prologue) 00:03:25.010 [Pipeline] echo 00:03:25.011 Node: VM-host-WFP7 00:03:25.018 [Pipeline] cleanWs 00:03:25.031 [WS-CLEANUP] Deleting project workspace... 00:03:25.031 [WS-CLEANUP] Deferred wipeout is used... 
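For reference, the pinned jbp checkout performed above can be reproduced by hand from the URL and revision shown in the log. This is only a sketch of the equivalent steps; the CI credentials, the Intel http proxy, and the per-command timeouts that Jenkins applies are omitted.

git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# depth-1 fetch of master, then a detached checkout of the revision Jenkins resolved
git fetch --tags --force --depth=1 origin refs/heads/master
git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b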
00:03:25.050 [WS-CLEANUP] done 00:03:25.236 [Pipeline] setCustomBuildProperty 00:03:25.323 [Pipeline] httpRequest 00:03:25.351 [Pipeline] echo 00:03:25.353 Sorcerer 10.211.164.101 is alive 00:03:25.364 [Pipeline] httpRequest 00:03:25.369 HttpMethod: GET 00:03:25.369 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:03:25.370 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:03:25.371 Response Code: HTTP/1.1 200 OK 00:03:25.371 Success: Status code 200 is in the accepted range: 200,404 00:03:25.372 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:03:25.515 [Pipeline] sh 00:03:25.796 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:03:25.812 [Pipeline] httpRequest 00:03:25.830 [Pipeline] echo 00:03:25.832 Sorcerer 10.211.164.101 is alive 00:03:25.841 [Pipeline] httpRequest 00:03:25.846 HttpMethod: GET 00:03:25.846 URL: http://10.211.164.101/packages/spdk_208b98e37a48134d9a5ceb19a52ecf58347d6aee.tar.gz 00:03:25.847 Sending request to url: http://10.211.164.101/packages/spdk_208b98e37a48134d9a5ceb19a52ecf58347d6aee.tar.gz 00:03:25.848 Response Code: HTTP/1.1 200 OK 00:03:25.849 Success: Status code 200 is in the accepted range: 200,404 00:03:25.850 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_208b98e37a48134d9a5ceb19a52ecf58347d6aee.tar.gz 00:03:28.027 [Pipeline] sh 00:03:28.318 + tar --no-same-owner -xf spdk_208b98e37a48134d9a5ceb19a52ecf58347d6aee.tar.gz 00:03:30.873 [Pipeline] sh 00:03:31.156 + git -C spdk log --oneline -n5 00:03:31.156 208b98e37 raid: Generic changes to support DIF/DIX for RAID 00:03:31.156 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:03:31.156 fc2398dfa raid: clear base bdev configure_cb after executing 00:03:31.156 5558f3f50 raid: complete bdev_raid_create after sb is written 00:03:31.156 d005e023b raid: fix empty slot not updated in sb after resize 00:03:31.180 [Pipeline] writeFile 00:03:31.203 [Pipeline] sh 00:03:31.510 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:31.522 [Pipeline] sh 00:03:31.804 + cat autorun-spdk.conf 00:03:31.804 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.804 SPDK_TEST_NVMF=1 00:03:31.804 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:31.804 SPDK_TEST_URING=1 00:03:31.804 SPDK_TEST_USDT=1 00:03:31.804 SPDK_RUN_UBSAN=1 00:03:31.804 NET_TYPE=virt 00:03:31.804 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:31.811 RUN_NIGHTLY=0 00:03:31.813 [Pipeline] } 00:03:31.829 [Pipeline] // stage 00:03:31.845 [Pipeline] stage 00:03:31.847 [Pipeline] { (Run VM) 00:03:31.860 [Pipeline] sh 00:03:32.141 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:32.141 + echo 'Start stage prepare_nvme.sh' 00:03:32.141 Start stage prepare_nvme.sh 00:03:32.141 + [[ -n 2 ]] 00:03:32.141 + disk_prefix=ex2 00:03:32.141 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:03:32.141 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:03:32.141 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:03:32.141 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:32.141 ++ SPDK_TEST_NVMF=1 00:03:32.141 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:32.141 ++ SPDK_TEST_URING=1 00:03:32.141 ++ SPDK_TEST_USDT=1 00:03:32.142 ++ SPDK_RUN_UBSAN=1 00:03:32.142 ++ NET_TYPE=virt 00:03:32.142 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:32.142 ++ RUN_NIGHTLY=0 00:03:32.142 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:32.142 + nvme_files=() 00:03:32.142 + declare -A nvme_files 00:03:32.142 + backend_dir=/var/lib/libvirt/images/backends 00:03:32.142 + nvme_files['nvme.img']=5G 00:03:32.142 + nvme_files['nvme-cmb.img']=5G 00:03:32.142 + nvme_files['nvme-multi0.img']=4G 00:03:32.142 + nvme_files['nvme-multi1.img']=4G 00:03:32.142 + nvme_files['nvme-multi2.img']=4G 00:03:32.142 + nvme_files['nvme-openstack.img']=8G 00:03:32.142 + nvme_files['nvme-zns.img']=5G 00:03:32.142 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:32.142 + (( SPDK_TEST_FTL == 1 )) 00:03:32.142 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:32.142 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:03:32.142 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:03:32.142 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:03:32.142 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:03:32.142 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:03:32.142 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:32.142 + for nvme in "${!nvme_files[@]}" 00:03:32.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:03:32.401 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:32.401 + for nvme in "${!nvme_files[@]}" 00:03:32.401 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:03:32.401 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:32.401 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:03:32.401 + echo 'End stage prepare_nvme.sh' 00:03:32.401 End stage prepare_nvme.sh 00:03:32.412 [Pipeline] sh 00:03:32.694 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:32.694 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:03:32.694 00:03:32.695 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:03:32.695 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:03:32.695 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:32.695 HELP=0 00:03:32.695 DRY_RUN=0 00:03:32.695 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:03:32.695 NVME_DISKS_TYPE=nvme,nvme, 00:03:32.695 NVME_AUTO_CREATE=0 00:03:32.695 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:03:32.695 NVME_CMB=,, 00:03:32.695 NVME_PMR=,, 00:03:32.695 NVME_ZNS=,, 00:03:32.695 NVME_MS=,, 00:03:32.695 NVME_FDP=,, 00:03:32.695 
SPDK_VAGRANT_DISTRO=fedora38 00:03:32.695 SPDK_VAGRANT_VMCPU=10 00:03:32.695 SPDK_VAGRANT_VMRAM=12288 00:03:32.695 SPDK_VAGRANT_PROVIDER=libvirt 00:03:32.695 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:32.695 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:32.695 SPDK_OPENSTACK_NETWORK=0 00:03:32.695 VAGRANT_PACKAGE_BOX=0 00:03:32.695 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:32.695 FORCE_DISTRO=true 00:03:32.695 VAGRANT_BOX_VERSION= 00:03:32.695 EXTRA_VAGRANTFILES= 00:03:32.695 NIC_MODEL=virtio 00:03:32.695 00:03:32.695 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:03:32.695 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:35.241 Bringing machine 'default' up with 'libvirt' provider... 00:03:35.511 ==> default: Creating image (snapshot of base box volume). 00:03:35.785 ==> default: Creating domain with the following settings... 00:03:35.785 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721915504_2c64ddcf2ba69453b950 00:03:35.785 ==> default: -- Domain type: kvm 00:03:35.785 ==> default: -- Cpus: 10 00:03:35.785 ==> default: -- Feature: acpi 00:03:35.785 ==> default: -- Feature: apic 00:03:35.785 ==> default: -- Feature: pae 00:03:35.785 ==> default: -- Memory: 12288M 00:03:35.785 ==> default: -- Memory Backing: hugepages: 00:03:35.785 ==> default: -- Management MAC: 00:03:35.785 ==> default: -- Loader: 00:03:35.785 ==> default: -- Nvram: 00:03:35.785 ==> default: -- Base box: spdk/fedora38 00:03:35.785 ==> default: -- Storage pool: default 00:03:35.785 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721915504_2c64ddcf2ba69453b950.img (20G) 00:03:35.785 ==> default: -- Volume Cache: default 00:03:35.785 ==> default: -- Kernel: 00:03:35.785 ==> default: -- Initrd: 00:03:35.785 ==> default: -- Graphics Type: vnc 00:03:35.785 ==> default: -- Graphics Port: -1 00:03:35.785 ==> default: -- Graphics IP: 127.0.0.1 00:03:35.785 ==> default: -- Graphics Password: Not defined 00:03:35.785 ==> default: -- Video Type: cirrus 00:03:35.785 ==> default: -- Video VRAM: 9216 00:03:35.785 ==> default: -- Sound Type: 00:03:35.785 ==> default: -- Keymap: en-us 00:03:35.785 ==> default: -- TPM Path: 00:03:35.785 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:35.785 ==> default: -- Command line args: 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:35.785 ==> default: -> value=-drive, 00:03:35.785 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:35.785 ==> default: -> value=-drive, 00:03:35.785 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:35.785 ==> default: -> value=-drive, 00:03:35.785 ==> 
default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:35.785 ==> default: -> value=-drive, 00:03:35.785 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:35.785 ==> default: -> value=-device, 00:03:35.785 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:36.047 ==> default: Creating shared folders metadata... 00:03:36.047 ==> default: Starting domain. 00:03:37.425 ==> default: Waiting for domain to get an IP address... 00:03:55.531 ==> default: Waiting for SSH to become available... 00:03:55.531 ==> default: Configuring and enabling network interfaces... 00:03:59.716 default: SSH address: 192.168.121.197:22 00:03:59.716 default: SSH username: vagrant 00:03:59.716 default: SSH auth method: private key 00:04:03.000 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:11.117 ==> default: Mounting SSHFS shared folder... 00:04:12.176 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:04:12.176 ==> default: Checking Mount.. 00:04:14.079 ==> default: Folder Successfully Mounted! 00:04:14.079 ==> default: Running provisioner: file... 00:04:14.643 default: ~/.gitconfig => .gitconfig 00:04:15.211 00:04:15.211 SUCCESS! 00:04:15.211 00:04:15.211 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:04:15.211 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:15.211 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:04:15.211 00:04:15.224 [Pipeline] } 00:04:15.248 [Pipeline] // stage 00:04:15.258 [Pipeline] dir 00:04:15.259 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:04:15.261 [Pipeline] { 00:04:15.275 [Pipeline] catchError 00:04:15.277 [Pipeline] { 00:04:15.292 [Pipeline] sh 00:04:15.575 + vagrant ssh-config --host vagrant 00:04:15.575 + sed -ne /^Host/,$p 00:04:15.575 + tee ssh_conf 00:04:18.900 Host vagrant 00:04:18.900 HostName 192.168.121.197 00:04:18.900 User vagrant 00:04:18.900 Port 22 00:04:18.900 UserKnownHostsFile /dev/null 00:04:18.900 StrictHostKeyChecking no 00:04:18.900 PasswordAuthentication no 00:04:18.900 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:18.900 IdentitiesOnly yes 00:04:18.900 LogLevel FATAL 00:04:18.900 ForwardAgent yes 00:04:18.900 ForwardX11 yes 00:04:18.900 00:04:18.913 [Pipeline] withEnv 00:04:18.915 [Pipeline] { 00:04:18.930 [Pipeline] sh 00:04:19.210 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:19.210 source /etc/os-release 00:04:19.210 [[ -e /image.version ]] && img=$(< /image.version) 00:04:19.210 # Minimal, systemd-like check. 
00:04:19.210 if [[ -e /.dockerenv ]]; then 00:04:19.210 # Clear garbage from the node's name: 00:04:19.210 # agt-er_autotest_547-896 -> autotest_547-896 00:04:19.210 # $HOSTNAME is the actual container id 00:04:19.210 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:19.210 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:19.210 # We can assume this is a mount from a host where container is running, 00:04:19.210 # so fetch its hostname to easily identify the target swarm worker. 00:04:19.210 container="$(< /etc/hostname) ($agent)" 00:04:19.210 else 00:04:19.210 # Fallback 00:04:19.210 container=$agent 00:04:19.210 fi 00:04:19.210 fi 00:04:19.210 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:19.210 00:04:19.481 [Pipeline] } 00:04:19.501 [Pipeline] // withEnv 00:04:19.511 [Pipeline] setCustomBuildProperty 00:04:19.526 [Pipeline] stage 00:04:19.528 [Pipeline] { (Tests) 00:04:19.546 [Pipeline] sh 00:04:19.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:20.101 [Pipeline] sh 00:04:20.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:20.658 [Pipeline] timeout 00:04:20.659 Timeout set to expire in 30 min 00:04:20.661 [Pipeline] { 00:04:20.679 [Pipeline] sh 00:04:20.964 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:21.533 HEAD is now at 208b98e37 raid: Generic changes to support DIF/DIX for RAID 00:04:21.545 [Pipeline] sh 00:04:21.826 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:22.098 [Pipeline] sh 00:04:22.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:22.655 [Pipeline] sh 00:04:22.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:04:23.196 ++ readlink -f spdk_repo 00:04:23.196 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:23.196 + [[ -n /home/vagrant/spdk_repo ]] 00:04:23.196 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:23.196 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:23.196 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:23.196 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:23.196 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:23.196 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:04:23.196 + cd /home/vagrant/spdk_repo 00:04:23.196 + source /etc/os-release 00:04:23.196 ++ NAME='Fedora Linux' 00:04:23.196 ++ VERSION='38 (Cloud Edition)' 00:04:23.196 ++ ID=fedora 00:04:23.196 ++ VERSION_ID=38 00:04:23.196 ++ VERSION_CODENAME= 00:04:23.196 ++ PLATFORM_ID=platform:f38 00:04:23.196 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:23.196 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:23.196 ++ LOGO=fedora-logo-icon 00:04:23.196 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:23.196 ++ HOME_URL=https://fedoraproject.org/ 00:04:23.196 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:23.196 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:23.196 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:23.196 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:23.196 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:23.196 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:23.196 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:23.196 ++ SUPPORT_END=2024-05-14 00:04:23.196 ++ VARIANT='Cloud Edition' 00:04:23.196 ++ VARIANT_ID=cloud 00:04:23.196 + uname -a 00:04:23.196 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:23.196 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:23.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.764 Hugepages 00:04:23.764 node hugesize free / total 00:04:23.764 node0 1048576kB 0 / 0 00:04:23.764 node0 2048kB 0 / 0 00:04:23.764 00:04:23.764 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.764 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.764 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:23.764 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:23.764 + rm -f /tmp/spdk-ld-path 00:04:23.764 + source autorun-spdk.conf 00:04:23.764 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:23.764 ++ SPDK_TEST_NVMF=1 00:04:23.764 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:23.764 ++ SPDK_TEST_URING=1 00:04:23.764 ++ SPDK_TEST_USDT=1 00:04:23.764 ++ SPDK_RUN_UBSAN=1 00:04:23.764 ++ NET_TYPE=virt 00:04:23.764 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:23.764 ++ RUN_NIGHTLY=0 00:04:23.764 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:23.764 + [[ -n '' ]] 00:04:23.764 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:23.764 + for M in /var/spdk/build-*-manifest.txt 00:04:23.764 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:23.764 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:23.764 + for M in /var/spdk/build-*-manifest.txt 00:04:23.764 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:23.764 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:23.764 ++ uname 00:04:23.764 + [[ Linux == \L\i\n\u\x ]] 00:04:23.764 + sudo dmesg -T 00:04:23.764 + sudo dmesg --clear 00:04:24.023 + dmesg_pid=5324 00:04:24.023 + [[ Fedora Linux == FreeBSD ]] 00:04:24.023 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:24.023 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:24.023 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:24.023 + sudo dmesg -Tw 00:04:24.023 + [[ -x /usr/src/fio-static/fio ]] 00:04:24.023 + export FIO_BIN=/usr/src/fio-static/fio 
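For reference, the per-job test configuration sourced here is the autorun-spdk.conf dumped earlier in this log. A minimal sketch of writing that file and handing it to spdk/autorun.sh follows; the variable values are copied verbatim from the `cat autorun-spdk.conf` output above, and the repository is assumed to already be rsynced to /home/vagrant/spdk_repo as shown in the provisioning stage.

cd /home/vagrant/spdk_repo
# write the job configuration consumed by autorun.sh
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_URING=1
SPDK_TEST_USDT=1
SPDK_RUN_UBSAN=1
NET_TYPE=virt
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF
# hand the configuration to the SPDK autorun entry point, as traced in this log
spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf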
00:04:24.023 + FIO_BIN=/usr/src/fio-static/fio 00:04:24.023 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:24.023 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:24.023 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:24.023 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:24.023 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:24.023 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:24.023 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:24.023 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:24.023 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:24.024 Test configuration: 00:04:24.024 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:24.024 SPDK_TEST_NVMF=1 00:04:24.024 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:24.024 SPDK_TEST_URING=1 00:04:24.024 SPDK_TEST_USDT=1 00:04:24.024 SPDK_RUN_UBSAN=1 00:04:24.024 NET_TYPE=virt 00:04:24.024 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:24.024 RUN_NIGHTLY=0 13:52:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.024 13:52:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:24.024 13:52:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.024 13:52:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.024 13:52:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.024 13:52:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.024 13:52:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.024 13:52:33 -- paths/export.sh@5 -- $ export PATH 00:04:24.024 13:52:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.024 13:52:33 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.024 13:52:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:24.024 13:52:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721915553.XXXXXX 00:04:24.024 13:52:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721915553.3VNjms 00:04:24.024 13:52:33 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:24.024 13:52:33 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:04:24.024 13:52:33 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:24.024 13:52:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:24.024 13:52:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:24.024 13:52:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:24.024 13:52:33 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:04:24.024 13:52:33 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.024 13:52:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:04:24.024 13:52:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:24.024 13:52:33 -- pm/common@17 -- $ local monitor 00:04:24.024 13:52:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.024 13:52:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.024 13:52:33 -- pm/common@25 -- $ sleep 1 00:04:24.024 13:52:33 -- pm/common@21 -- $ date +%s 00:04:24.024 13:52:33 -- pm/common@21 -- $ date +%s 00:04:24.024 13:52:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915553 00:04:24.024 13:52:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915553 00:04:24.024 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915553_collect-vmstat.pm.log 00:04:24.024 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915553_collect-cpu-load.pm.log 00:04:24.959 13:52:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:24.959 13:52:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:24.959 13:52:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:24.959 13:52:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:24.959 13:52:34 -- spdk/autobuild.sh@16 -- $ date -u 00:04:24.959 Thu Jul 25 01:52:34 PM UTC 2024 00:04:24.959 13:52:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:25.217 v24.09-pre-322-g208b98e37 00:04:25.217 13:52:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:25.217 13:52:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:25.217 13:52:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:25.217 13:52:34 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:25.217 13:52:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:25.217 13:52:34 -- common/autotest_common.sh@10 -- $ set +x 00:04:25.217 ************************************ 00:04:25.217 START TEST ubsan 00:04:25.217 ************************************ 00:04:25.217 using ubsan 00:04:25.217 13:52:34 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:25.217 00:04:25.218 real 0m0.001s 00:04:25.218 user 0m0.001s 00:04:25.218 sys 0m0.000s 00:04:25.218 
13:52:34 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:25.218 13:52:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:25.218 ************************************ 00:04:25.218 END TEST ubsan 00:04:25.218 ************************************ 00:04:25.218 13:52:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:25.218 13:52:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:25.218 13:52:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:25.218 13:52:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:04:25.218 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:25.218 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:25.786 Using 'verbs' RDMA provider 00:04:42.033 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:56.915 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:56.915 Creating mk/config.mk...done. 00:04:56.915 Creating mk/cc.flags.mk...done. 00:04:56.915 Type 'make' to build. 00:04:56.915 13:53:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:56.915 13:53:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:56.915 13:53:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:56.915 13:53:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:56.915 ************************************ 00:04:56.915 START TEST make 00:04:56.915 ************************************ 00:04:56.915 13:53:05 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:56.915 make[1]: Nothing to be done for 'all'. 
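For reference, the build performed from this point can be reproduced by hand with the configure flags traced above. This is only a sketch of the top-level commands; the Meson/ninja build of the bundled DPDK whose output follows is driven internally by SPDK's configure, with the options echoed in the "User defined options" summary below.

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared
make -j10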
00:05:09.135 The Meson build system 00:05:09.135 Version: 1.3.1 00:05:09.135 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:09.135 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:09.135 Build type: native build 00:05:09.135 Program cat found: YES (/usr/bin/cat) 00:05:09.135 Project name: DPDK 00:05:09.135 Project version: 24.03.0 00:05:09.135 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:09.135 C linker for the host machine: cc ld.bfd 2.39-16 00:05:09.135 Host machine cpu family: x86_64 00:05:09.135 Host machine cpu: x86_64 00:05:09.135 Message: ## Building in Developer Mode ## 00:05:09.135 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:09.135 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:09.135 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:09.135 Program python3 found: YES (/usr/bin/python3) 00:05:09.135 Program cat found: YES (/usr/bin/cat) 00:05:09.135 Compiler for C supports arguments -march=native: YES 00:05:09.135 Checking for size of "void *" : 8 00:05:09.135 Checking for size of "void *" : 8 (cached) 00:05:09.135 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:05:09.135 Library m found: YES 00:05:09.135 Library numa found: YES 00:05:09.135 Has header "numaif.h" : YES 00:05:09.135 Library fdt found: NO 00:05:09.135 Library execinfo found: NO 00:05:09.135 Has header "execinfo.h" : YES 00:05:09.135 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:09.135 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:09.135 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:09.135 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:09.135 Run-time dependency openssl found: YES 3.0.9 00:05:09.135 Run-time dependency libpcap found: YES 1.10.4 00:05:09.135 Has header "pcap.h" with dependency libpcap: YES 00:05:09.135 Compiler for C supports arguments -Wcast-qual: YES 00:05:09.135 Compiler for C supports arguments -Wdeprecated: YES 00:05:09.135 Compiler for C supports arguments -Wformat: YES 00:05:09.135 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:09.135 Compiler for C supports arguments -Wformat-security: NO 00:05:09.135 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:09.135 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:09.135 Compiler for C supports arguments -Wnested-externs: YES 00:05:09.135 Compiler for C supports arguments -Wold-style-definition: YES 00:05:09.135 Compiler for C supports arguments -Wpointer-arith: YES 00:05:09.135 Compiler for C supports arguments -Wsign-compare: YES 00:05:09.135 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:09.135 Compiler for C supports arguments -Wundef: YES 00:05:09.135 Compiler for C supports arguments -Wwrite-strings: YES 00:05:09.135 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:09.135 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:09.135 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:09.135 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:09.135 Program objdump found: YES (/usr/bin/objdump) 00:05:09.135 Compiler for C supports arguments -mavx512f: YES 00:05:09.135 Checking if "AVX512 checking" compiles: YES 00:05:09.135 Fetching value of define "__SSE4_2__" : 1 00:05:09.135 Fetching value of define 
"__AES__" : 1 00:05:09.135 Fetching value of define "__AVX__" : 1 00:05:09.135 Fetching value of define "__AVX2__" : 1 00:05:09.135 Fetching value of define "__AVX512BW__" : 1 00:05:09.135 Fetching value of define "__AVX512CD__" : 1 00:05:09.135 Fetching value of define "__AVX512DQ__" : 1 00:05:09.135 Fetching value of define "__AVX512F__" : 1 00:05:09.135 Fetching value of define "__AVX512VL__" : 1 00:05:09.135 Fetching value of define "__PCLMUL__" : 1 00:05:09.135 Fetching value of define "__RDRND__" : 1 00:05:09.135 Fetching value of define "__RDSEED__" : 1 00:05:09.135 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:09.135 Fetching value of define "__znver1__" : (undefined) 00:05:09.135 Fetching value of define "__znver2__" : (undefined) 00:05:09.135 Fetching value of define "__znver3__" : (undefined) 00:05:09.135 Fetching value of define "__znver4__" : (undefined) 00:05:09.135 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:09.135 Message: lib/log: Defining dependency "log" 00:05:09.135 Message: lib/kvargs: Defining dependency "kvargs" 00:05:09.135 Message: lib/telemetry: Defining dependency "telemetry" 00:05:09.135 Checking for function "getentropy" : NO 00:05:09.135 Message: lib/eal: Defining dependency "eal" 00:05:09.135 Message: lib/ring: Defining dependency "ring" 00:05:09.135 Message: lib/rcu: Defining dependency "rcu" 00:05:09.135 Message: lib/mempool: Defining dependency "mempool" 00:05:09.135 Message: lib/mbuf: Defining dependency "mbuf" 00:05:09.135 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:09.135 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:09.135 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:09.135 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:09.135 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:09.135 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:09.135 Compiler for C supports arguments -mpclmul: YES 00:05:09.135 Compiler for C supports arguments -maes: YES 00:05:09.135 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:09.135 Compiler for C supports arguments -mavx512bw: YES 00:05:09.135 Compiler for C supports arguments -mavx512dq: YES 00:05:09.135 Compiler for C supports arguments -mavx512vl: YES 00:05:09.135 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:09.135 Compiler for C supports arguments -mavx2: YES 00:05:09.135 Compiler for C supports arguments -mavx: YES 00:05:09.135 Message: lib/net: Defining dependency "net" 00:05:09.135 Message: lib/meter: Defining dependency "meter" 00:05:09.135 Message: lib/ethdev: Defining dependency "ethdev" 00:05:09.135 Message: lib/pci: Defining dependency "pci" 00:05:09.135 Message: lib/cmdline: Defining dependency "cmdline" 00:05:09.135 Message: lib/hash: Defining dependency "hash" 00:05:09.135 Message: lib/timer: Defining dependency "timer" 00:05:09.135 Message: lib/compressdev: Defining dependency "compressdev" 00:05:09.135 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:09.135 Message: lib/dmadev: Defining dependency "dmadev" 00:05:09.135 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:09.135 Message: lib/power: Defining dependency "power" 00:05:09.135 Message: lib/reorder: Defining dependency "reorder" 00:05:09.135 Message: lib/security: Defining dependency "security" 00:05:09.135 Has header "linux/userfaultfd.h" : YES 00:05:09.135 Has header "linux/vduse.h" : YES 00:05:09.135 Message: lib/vhost: Defining dependency "vhost" 00:05:09.135 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:05:09.135 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:09.135 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:09.135 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:09.135 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:09.135 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:09.135 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:09.135 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:09.135 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:09.135 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:09.135 Program doxygen found: YES (/usr/bin/doxygen) 00:05:09.135 Configuring doxy-api-html.conf using configuration 00:05:09.135 Configuring doxy-api-man.conf using configuration 00:05:09.135 Program mandb found: YES (/usr/bin/mandb) 00:05:09.135 Program sphinx-build found: NO 00:05:09.135 Configuring rte_build_config.h using configuration 00:05:09.135 Message: 00:05:09.135 ================= 00:05:09.135 Applications Enabled 00:05:09.135 ================= 00:05:09.135 00:05:09.135 apps: 00:05:09.135 00:05:09.135 00:05:09.135 Message: 00:05:09.135 ================= 00:05:09.135 Libraries Enabled 00:05:09.135 ================= 00:05:09.135 00:05:09.135 libs: 00:05:09.135 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:09.135 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:09.135 cryptodev, dmadev, power, reorder, security, vhost, 00:05:09.135 00:05:09.135 Message: 00:05:09.135 =============== 00:05:09.135 Drivers Enabled 00:05:09.135 =============== 00:05:09.135 00:05:09.135 common: 00:05:09.135 00:05:09.135 bus: 00:05:09.135 pci, vdev, 00:05:09.135 mempool: 00:05:09.135 ring, 00:05:09.135 dma: 00:05:09.135 00:05:09.135 net: 00:05:09.135 00:05:09.135 crypto: 00:05:09.135 00:05:09.135 compress: 00:05:09.135 00:05:09.135 vdpa: 00:05:09.135 00:05:09.135 00:05:09.135 Message: 00:05:09.135 ================= 00:05:09.135 Content Skipped 00:05:09.135 ================= 00:05:09.135 00:05:09.135 apps: 00:05:09.135 dumpcap: explicitly disabled via build config 00:05:09.135 graph: explicitly disabled via build config 00:05:09.135 pdump: explicitly disabled via build config 00:05:09.135 proc-info: explicitly disabled via build config 00:05:09.135 test-acl: explicitly disabled via build config 00:05:09.135 test-bbdev: explicitly disabled via build config 00:05:09.135 test-cmdline: explicitly disabled via build config 00:05:09.135 test-compress-perf: explicitly disabled via build config 00:05:09.135 test-crypto-perf: explicitly disabled via build config 00:05:09.135 test-dma-perf: explicitly disabled via build config 00:05:09.136 test-eventdev: explicitly disabled via build config 00:05:09.136 test-fib: explicitly disabled via build config 00:05:09.136 test-flow-perf: explicitly disabled via build config 00:05:09.136 test-gpudev: explicitly disabled via build config 00:05:09.136 test-mldev: explicitly disabled via build config 00:05:09.136 test-pipeline: explicitly disabled via build config 00:05:09.136 test-pmd: explicitly disabled via build config 00:05:09.136 test-regex: explicitly disabled via build config 00:05:09.136 test-sad: explicitly disabled via build config 00:05:09.136 test-security-perf: explicitly disabled via build config 00:05:09.136 00:05:09.136 libs: 00:05:09.136 argparse: 
explicitly disabled via build config 00:05:09.136 metrics: explicitly disabled via build config 00:05:09.136 acl: explicitly disabled via build config 00:05:09.136 bbdev: explicitly disabled via build config 00:05:09.136 bitratestats: explicitly disabled via build config 00:05:09.136 bpf: explicitly disabled via build config 00:05:09.136 cfgfile: explicitly disabled via build config 00:05:09.136 distributor: explicitly disabled via build config 00:05:09.136 efd: explicitly disabled via build config 00:05:09.136 eventdev: explicitly disabled via build config 00:05:09.136 dispatcher: explicitly disabled via build config 00:05:09.136 gpudev: explicitly disabled via build config 00:05:09.136 gro: explicitly disabled via build config 00:05:09.136 gso: explicitly disabled via build config 00:05:09.136 ip_frag: explicitly disabled via build config 00:05:09.136 jobstats: explicitly disabled via build config 00:05:09.136 latencystats: explicitly disabled via build config 00:05:09.136 lpm: explicitly disabled via build config 00:05:09.136 member: explicitly disabled via build config 00:05:09.136 pcapng: explicitly disabled via build config 00:05:09.136 rawdev: explicitly disabled via build config 00:05:09.136 regexdev: explicitly disabled via build config 00:05:09.136 mldev: explicitly disabled via build config 00:05:09.136 rib: explicitly disabled via build config 00:05:09.136 sched: explicitly disabled via build config 00:05:09.136 stack: explicitly disabled via build config 00:05:09.136 ipsec: explicitly disabled via build config 00:05:09.136 pdcp: explicitly disabled via build config 00:05:09.136 fib: explicitly disabled via build config 00:05:09.136 port: explicitly disabled via build config 00:05:09.136 pdump: explicitly disabled via build config 00:05:09.136 table: explicitly disabled via build config 00:05:09.136 pipeline: explicitly disabled via build config 00:05:09.136 graph: explicitly disabled via build config 00:05:09.136 node: explicitly disabled via build config 00:05:09.136 00:05:09.136 drivers: 00:05:09.136 common/cpt: not in enabled drivers build config 00:05:09.136 common/dpaax: not in enabled drivers build config 00:05:09.136 common/iavf: not in enabled drivers build config 00:05:09.136 common/idpf: not in enabled drivers build config 00:05:09.136 common/ionic: not in enabled drivers build config 00:05:09.136 common/mvep: not in enabled drivers build config 00:05:09.136 common/octeontx: not in enabled drivers build config 00:05:09.136 bus/auxiliary: not in enabled drivers build config 00:05:09.136 bus/cdx: not in enabled drivers build config 00:05:09.136 bus/dpaa: not in enabled drivers build config 00:05:09.136 bus/fslmc: not in enabled drivers build config 00:05:09.136 bus/ifpga: not in enabled drivers build config 00:05:09.136 bus/platform: not in enabled drivers build config 00:05:09.136 bus/uacce: not in enabled drivers build config 00:05:09.136 bus/vmbus: not in enabled drivers build config 00:05:09.136 common/cnxk: not in enabled drivers build config 00:05:09.136 common/mlx5: not in enabled drivers build config 00:05:09.136 common/nfp: not in enabled drivers build config 00:05:09.136 common/nitrox: not in enabled drivers build config 00:05:09.136 common/qat: not in enabled drivers build config 00:05:09.136 common/sfc_efx: not in enabled drivers build config 00:05:09.136 mempool/bucket: not in enabled drivers build config 00:05:09.136 mempool/cnxk: not in enabled drivers build config 00:05:09.136 mempool/dpaa: not in enabled drivers build config 00:05:09.136 mempool/dpaa2: 
not in enabled drivers build config 00:05:09.136 mempool/octeontx: not in enabled drivers build config 00:05:09.136 mempool/stack: not in enabled drivers build config 00:05:09.136 dma/cnxk: not in enabled drivers build config 00:05:09.136 dma/dpaa: not in enabled drivers build config 00:05:09.136 dma/dpaa2: not in enabled drivers build config 00:05:09.136 dma/hisilicon: not in enabled drivers build config 00:05:09.136 dma/idxd: not in enabled drivers build config 00:05:09.136 dma/ioat: not in enabled drivers build config 00:05:09.136 dma/skeleton: not in enabled drivers build config 00:05:09.136 net/af_packet: not in enabled drivers build config 00:05:09.136 net/af_xdp: not in enabled drivers build config 00:05:09.136 net/ark: not in enabled drivers build config 00:05:09.136 net/atlantic: not in enabled drivers build config 00:05:09.136 net/avp: not in enabled drivers build config 00:05:09.136 net/axgbe: not in enabled drivers build config 00:05:09.136 net/bnx2x: not in enabled drivers build config 00:05:09.136 net/bnxt: not in enabled drivers build config 00:05:09.136 net/bonding: not in enabled drivers build config 00:05:09.136 net/cnxk: not in enabled drivers build config 00:05:09.136 net/cpfl: not in enabled drivers build config 00:05:09.136 net/cxgbe: not in enabled drivers build config 00:05:09.136 net/dpaa: not in enabled drivers build config 00:05:09.136 net/dpaa2: not in enabled drivers build config 00:05:09.136 net/e1000: not in enabled drivers build config 00:05:09.136 net/ena: not in enabled drivers build config 00:05:09.136 net/enetc: not in enabled drivers build config 00:05:09.136 net/enetfec: not in enabled drivers build config 00:05:09.136 net/enic: not in enabled drivers build config 00:05:09.136 net/failsafe: not in enabled drivers build config 00:05:09.136 net/fm10k: not in enabled drivers build config 00:05:09.136 net/gve: not in enabled drivers build config 00:05:09.136 net/hinic: not in enabled drivers build config 00:05:09.136 net/hns3: not in enabled drivers build config 00:05:09.136 net/i40e: not in enabled drivers build config 00:05:09.136 net/iavf: not in enabled drivers build config 00:05:09.136 net/ice: not in enabled drivers build config 00:05:09.136 net/idpf: not in enabled drivers build config 00:05:09.136 net/igc: not in enabled drivers build config 00:05:09.136 net/ionic: not in enabled drivers build config 00:05:09.136 net/ipn3ke: not in enabled drivers build config 00:05:09.136 net/ixgbe: not in enabled drivers build config 00:05:09.136 net/mana: not in enabled drivers build config 00:05:09.136 net/memif: not in enabled drivers build config 00:05:09.136 net/mlx4: not in enabled drivers build config 00:05:09.136 net/mlx5: not in enabled drivers build config 00:05:09.136 net/mvneta: not in enabled drivers build config 00:05:09.136 net/mvpp2: not in enabled drivers build config 00:05:09.136 net/netvsc: not in enabled drivers build config 00:05:09.136 net/nfb: not in enabled drivers build config 00:05:09.136 net/nfp: not in enabled drivers build config 00:05:09.136 net/ngbe: not in enabled drivers build config 00:05:09.136 net/null: not in enabled drivers build config 00:05:09.136 net/octeontx: not in enabled drivers build config 00:05:09.136 net/octeon_ep: not in enabled drivers build config 00:05:09.136 net/pcap: not in enabled drivers build config 00:05:09.136 net/pfe: not in enabled drivers build config 00:05:09.136 net/qede: not in enabled drivers build config 00:05:09.136 net/ring: not in enabled drivers build config 00:05:09.136 net/sfc: not in 
enabled drivers build config 00:05:09.136 net/softnic: not in enabled drivers build config 00:05:09.136 net/tap: not in enabled drivers build config 00:05:09.136 net/thunderx: not in enabled drivers build config 00:05:09.136 net/txgbe: not in enabled drivers build config 00:05:09.136 net/vdev_netvsc: not in enabled drivers build config 00:05:09.136 net/vhost: not in enabled drivers build config 00:05:09.136 net/virtio: not in enabled drivers build config 00:05:09.136 net/vmxnet3: not in enabled drivers build config 00:05:09.136 raw/*: missing internal dependency, "rawdev" 00:05:09.136 crypto/armv8: not in enabled drivers build config 00:05:09.136 crypto/bcmfs: not in enabled drivers build config 00:05:09.136 crypto/caam_jr: not in enabled drivers build config 00:05:09.136 crypto/ccp: not in enabled drivers build config 00:05:09.136 crypto/cnxk: not in enabled drivers build config 00:05:09.136 crypto/dpaa_sec: not in enabled drivers build config 00:05:09.136 crypto/dpaa2_sec: not in enabled drivers build config 00:05:09.136 crypto/ipsec_mb: not in enabled drivers build config 00:05:09.136 crypto/mlx5: not in enabled drivers build config 00:05:09.136 crypto/mvsam: not in enabled drivers build config 00:05:09.136 crypto/nitrox: not in enabled drivers build config 00:05:09.136 crypto/null: not in enabled drivers build config 00:05:09.136 crypto/octeontx: not in enabled drivers build config 00:05:09.136 crypto/openssl: not in enabled drivers build config 00:05:09.136 crypto/scheduler: not in enabled drivers build config 00:05:09.136 crypto/uadk: not in enabled drivers build config 00:05:09.136 crypto/virtio: not in enabled drivers build config 00:05:09.136 compress/isal: not in enabled drivers build config 00:05:09.136 compress/mlx5: not in enabled drivers build config 00:05:09.136 compress/nitrox: not in enabled drivers build config 00:05:09.136 compress/octeontx: not in enabled drivers build config 00:05:09.136 compress/zlib: not in enabled drivers build config 00:05:09.136 regex/*: missing internal dependency, "regexdev" 00:05:09.136 ml/*: missing internal dependency, "mldev" 00:05:09.136 vdpa/ifc: not in enabled drivers build config 00:05:09.136 vdpa/mlx5: not in enabled drivers build config 00:05:09.136 vdpa/nfp: not in enabled drivers build config 00:05:09.136 vdpa/sfc: not in enabled drivers build config 00:05:09.136 event/*: missing internal dependency, "eventdev" 00:05:09.136 baseband/*: missing internal dependency, "bbdev" 00:05:09.136 gpu/*: missing internal dependency, "gpudev" 00:05:09.136 00:05:09.136 00:05:09.136 Build targets in project: 85 00:05:09.136 00:05:09.136 DPDK 24.03.0 00:05:09.136 00:05:09.136 User defined options 00:05:09.136 buildtype : debug 00:05:09.136 default_library : shared 00:05:09.137 libdir : lib 00:05:09.137 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:09.137 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:09.137 c_link_args : 00:05:09.137 cpu_instruction_set: native 00:05:09.137 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:09.137 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:09.137 enable_docs : false 00:05:09.137 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:09.137 enable_kmods : false 00:05:09.137 max_lcores : 128 00:05:09.137 tests : false 00:05:09.137 00:05:09.137 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:09.137 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:09.137 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:09.137 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:09.137 [3/268] Linking static target lib/librte_kvargs.a 00:05:09.137 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:09.137 [5/268] Linking static target lib/librte_log.a 00:05:09.137 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:09.137 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.137 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:09.137 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:09.137 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:09.137 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:09.137 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:09.137 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:09.137 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:09.137 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:09.137 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:09.137 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:09.137 [18/268] Linking static target lib/librte_telemetry.a 00:05:09.137 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.137 [20/268] Linking target lib/librte_log.so.24.1 00:05:09.137 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:09.137 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:09.396 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:09.396 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:09.396 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:09.396 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:09.396 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:09.396 [28/268] Linking target lib/librte_kvargs.so.24.1 00:05:09.655 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:09.655 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:09.655 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:09.655 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:09.655 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:09.915 [34/268] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.915 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:09.915 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:09.915 [37/268] Linking target lib/librte_telemetry.so.24.1 00:05:09.915 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:09.915 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:09.915 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:09.915 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:10.174 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:10.174 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:10.174 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:10.174 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:10.174 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:10.174 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:10.433 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:10.433 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:10.433 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:10.692 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:10.692 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:10.692 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:10.692 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:10.692 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:10.950 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:10.950 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:10.950 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:10.950 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:10.950 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:10.950 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:11.208 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:11.208 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:11.467 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:11.468 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:11.468 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:11.468 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:11.468 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:11.728 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:11.728 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:11.728 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:11.728 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:11.728 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
00:05:11.728 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:11.728 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:11.988 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:11.988 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:11.988 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:12.248 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:12.248 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:12.248 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:12.248 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:12.248 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:12.507 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:12.507 [85/268] Linking static target lib/librte_ring.a 00:05:12.507 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:12.507 [87/268] Linking static target lib/librte_eal.a 00:05:12.770 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:12.770 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:12.770 [90/268] Linking static target lib/librte_rcu.a 00:05:12.770 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:12.770 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:12.770 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:12.770 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:13.032 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:13.033 [96/268] Linking static target lib/librte_mempool.a 00:05:13.033 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.033 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:13.292 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.292 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:13.292 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:13.292 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:13.292 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:13.292 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:13.550 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:13.550 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:13.550 [107/268] Linking static target lib/librte_mbuf.a 00:05:13.550 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:13.550 [109/268] Linking static target lib/librte_net.a 00:05:13.808 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:13.809 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:13.809 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:13.809 [113/268] Linking static target lib/librte_meter.a 00:05:13.809 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:14.067 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:14.067 [116/268] Generating lib/net.sym_chk with a custom 
command (wrapped by meson to capture output) 00:05:14.067 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.067 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.324 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:14.324 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:14.582 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.582 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:14.840 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:14.840 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:14.840 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:14.840 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:14.840 [127/268] Linking static target lib/librte_pci.a 00:05:15.098 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:15.099 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:15.099 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:15.099 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:15.099 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:15.099 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:15.099 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:15.099 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:15.099 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:15.357 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:15.357 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:15.357 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:15.357 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:15.357 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:15.357 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:15.357 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.357 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:15.357 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:15.614 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:15.614 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:15.614 [148/268] Linking static target lib/librte_ethdev.a 00:05:15.614 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:15.614 [150/268] Linking static target lib/librte_cmdline.a 00:05:15.873 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:15.873 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:15.873 [153/268] Linking static target lib/librte_timer.a 00:05:15.873 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:15.873 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:16.176 [156/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:16.176 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:16.176 [158/268] Linking static target lib/librte_hash.a 00:05:16.176 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:16.434 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:16.434 [161/268] Linking static target lib/librte_compressdev.a 00:05:16.434 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:16.434 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.434 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:16.434 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:16.693 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:16.693 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:16.693 [168/268] Linking static target lib/librte_dmadev.a 00:05:16.950 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:16.950 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:16.950 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:16.950 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:17.208 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:17.208 [174/268] Linking static target lib/librte_cryptodev.a 00:05:17.208 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.208 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:17.465 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.465 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:17.465 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.465 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:17.723 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.723 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:17.723 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:17.723 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:17.981 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:17.981 [186/268] Linking static target lib/librte_power.a 00:05:17.981 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:17.981 [188/268] Linking static target lib/librte_security.a 00:05:18.239 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:18.239 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:18.239 [191/268] Linking static target lib/librte_reorder.a 00:05:18.239 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:18.239 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:18.496 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:18.755 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.755 [196/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.012 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:19.012 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:19.012 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.012 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:19.270 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:19.270 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:19.528 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:19.528 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:19.528 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:19.528 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.528 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:19.785 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:19.785 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:19.785 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:19.785 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:19.785 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:20.043 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:20.043 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:20.043 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:20.043 [216/268] Linking static target drivers/librte_bus_vdev.a 00:05:20.043 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:20.043 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:20.043 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:20.043 [220/268] Linking static target drivers/librte_bus_pci.a 00:05:20.043 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:20.300 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:20.300 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.300 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:20.300 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:20.300 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:20.300 [227/268] Linking static target drivers/librte_mempool_ring.a 00:05:20.558 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.124 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:21.124 [230/268] Linking static target lib/librte_vhost.a 00:05:23.023 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.023 [232/268] Linking target lib/librte_eal.so.24.1 00:05:23.023 [233/268] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:05:23.023 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:23.023 [235/268] Linking target lib/librte_meter.so.24.1 00:05:23.023 [236/268] Linking target lib/librte_dmadev.so.24.1 00:05:23.023 [237/268] Linking target lib/librte_pci.so.24.1 00:05:23.023 [238/268] Linking target lib/librte_timer.so.24.1 00:05:23.023 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:23.023 [240/268] Linking target lib/librte_ring.so.24.1 00:05:23.281 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:23.281 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:23.281 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:23.281 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:23.281 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:23.281 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:23.281 [247/268] Linking target lib/librte_mempool.so.24.1 00:05:23.281 [248/268] Linking target lib/librte_rcu.so.24.1 00:05:23.539 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:23.539 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:23.539 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:23.539 [252/268] Linking target lib/librte_mbuf.so.24.1 00:05:23.539 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:23.539 [254/268] Linking target lib/librte_net.so.24.1 00:05:23.539 [255/268] Linking target lib/librte_compressdev.so.24.1 00:05:23.539 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:05:23.539 [257/268] Linking target lib/librte_reorder.so.24.1 00:05:23.797 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:23.797 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:23.797 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.797 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:23.797 [262/268] Linking target lib/librte_security.so.24.1 00:05:23.797 [263/268] Linking target lib/librte_hash.so.24.1 00:05:23.797 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:24.055 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:24.055 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:24.055 [267/268] Linking target lib/librte_power.so.24.1 00:05:24.055 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:24.055 INFO: autodetecting backend as ninja 00:05:24.055 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:25.457 CC lib/ut_mock/mock.o 00:05:25.457 CC lib/log/log.o 00:05:25.457 CC lib/log/log_deprecated.o 00:05:25.457 CC lib/log/log_flags.o 00:05:25.457 CC lib/ut/ut.o 00:05:25.457 LIB libspdk_ut_mock.a 00:05:25.457 LIB libspdk_log.a 00:05:25.457 LIB libspdk_ut.a 00:05:25.457 SO libspdk_ut_mock.so.6.0 00:05:25.457 SO libspdk_ut.so.2.0 00:05:25.457 SO libspdk_log.so.7.0 00:05:25.457 SYMLINK libspdk_ut_mock.so 00:05:25.457 SYMLINK libspdk_ut.so 00:05:25.457 SYMLINK libspdk_log.so 00:05:25.716 CXX lib/trace_parser/trace.o 
00:05:25.716 CC lib/dma/dma.o 00:05:25.716 CC lib/ioat/ioat.o 00:05:25.716 CC lib/util/base64.o 00:05:25.716 CC lib/util/bit_array.o 00:05:25.716 CC lib/util/cpuset.o 00:05:25.716 CC lib/util/crc16.o 00:05:25.716 CC lib/util/crc32.o 00:05:25.716 CC lib/util/crc32c.o 00:05:25.716 CC lib/vfio_user/host/vfio_user_pci.o 00:05:25.975 CC lib/util/crc32_ieee.o 00:05:25.975 CC lib/util/crc64.o 00:05:25.975 CC lib/util/dif.o 00:05:25.975 CC lib/util/fd.o 00:05:25.975 CC lib/vfio_user/host/vfio_user.o 00:05:25.975 CC lib/util/fd_group.o 00:05:25.975 CC lib/util/file.o 00:05:25.975 LIB libspdk_dma.a 00:05:25.975 SO libspdk_dma.so.4.0 00:05:25.975 CC lib/util/hexlify.o 00:05:25.975 CC lib/util/iov.o 00:05:26.233 CC lib/util/math.o 00:05:26.233 LIB libspdk_ioat.a 00:05:26.233 SYMLINK libspdk_dma.so 00:05:26.233 LIB libspdk_vfio_user.a 00:05:26.233 CC lib/util/net.o 00:05:26.233 SO libspdk_ioat.so.7.0 00:05:26.233 CC lib/util/pipe.o 00:05:26.233 CC lib/util/strerror_tls.o 00:05:26.233 SO libspdk_vfio_user.so.5.0 00:05:26.233 SYMLINK libspdk_ioat.so 00:05:26.233 CC lib/util/string.o 00:05:26.233 CC lib/util/uuid.o 00:05:26.233 SYMLINK libspdk_vfio_user.so 00:05:26.233 CC lib/util/xor.o 00:05:26.233 CC lib/util/zipf.o 00:05:26.799 LIB libspdk_util.a 00:05:26.799 SO libspdk_util.so.10.0 00:05:26.799 LIB libspdk_trace_parser.a 00:05:27.057 SO libspdk_trace_parser.so.5.0 00:05:27.057 SYMLINK libspdk_util.so 00:05:27.057 SYMLINK libspdk_trace_parser.so 00:05:27.057 CC lib/conf/conf.o 00:05:27.057 CC lib/env_dpdk/env.o 00:05:27.057 CC lib/env_dpdk/memory.o 00:05:27.057 CC lib/env_dpdk/pci.o 00:05:27.057 CC lib/idxd/idxd.o 00:05:27.057 CC lib/env_dpdk/init.o 00:05:27.057 CC lib/json/json_parse.o 00:05:27.057 CC lib/rdma_utils/rdma_utils.o 00:05:27.057 CC lib/rdma_provider/common.o 00:05:27.057 CC lib/vmd/vmd.o 00:05:27.315 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:27.573 LIB libspdk_conf.a 00:05:27.573 CC lib/json/json_util.o 00:05:27.573 LIB libspdk_rdma_provider.a 00:05:27.573 SO libspdk_conf.so.6.0 00:05:27.573 LIB libspdk_rdma_utils.a 00:05:27.573 SO libspdk_rdma_provider.so.6.0 00:05:27.573 SO libspdk_rdma_utils.so.1.0 00:05:27.573 SYMLINK libspdk_conf.so 00:05:27.573 CC lib/json/json_write.o 00:05:27.573 CC lib/vmd/led.o 00:05:27.573 SYMLINK libspdk_rdma_provider.so 00:05:27.573 CC lib/idxd/idxd_user.o 00:05:27.573 SYMLINK libspdk_rdma_utils.so 00:05:27.573 CC lib/idxd/idxd_kernel.o 00:05:27.573 CC lib/env_dpdk/threads.o 00:05:27.831 CC lib/env_dpdk/pci_ioat.o 00:05:27.831 CC lib/env_dpdk/pci_virtio.o 00:05:27.831 CC lib/env_dpdk/pci_vmd.o 00:05:27.831 LIB libspdk_vmd.a 00:05:27.831 CC lib/env_dpdk/pci_idxd.o 00:05:27.831 CC lib/env_dpdk/pci_event.o 00:05:27.831 SO libspdk_vmd.so.6.0 00:05:27.831 CC lib/env_dpdk/sigbus_handler.o 00:05:27.831 CC lib/env_dpdk/pci_dpdk.o 00:05:27.831 LIB libspdk_json.a 00:05:27.831 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:27.831 LIB libspdk_idxd.a 00:05:27.831 SO libspdk_json.so.6.0 00:05:27.831 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:27.831 SYMLINK libspdk_vmd.so 00:05:28.089 SO libspdk_idxd.so.12.0 00:05:28.089 SYMLINK libspdk_json.so 00:05:28.089 SYMLINK libspdk_idxd.so 00:05:28.346 CC lib/jsonrpc/jsonrpc_server.o 00:05:28.346 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:28.346 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:28.346 CC lib/jsonrpc/jsonrpc_client.o 00:05:28.603 LIB libspdk_jsonrpc.a 00:05:28.603 SO libspdk_jsonrpc.so.6.0 00:05:28.603 SYMLINK libspdk_jsonrpc.so 00:05:28.603 LIB libspdk_env_dpdk.a 00:05:28.861 SO libspdk_env_dpdk.so.15.0 00:05:28.861 CC 
lib/rpc/rpc.o 00:05:28.861 SYMLINK libspdk_env_dpdk.so 00:05:29.118 LIB libspdk_rpc.a 00:05:29.118 SO libspdk_rpc.so.6.0 00:05:29.375 SYMLINK libspdk_rpc.so 00:05:29.375 CC lib/notify/notify.o 00:05:29.375 CC lib/notify/notify_rpc.o 00:05:29.632 CC lib/keyring/keyring_rpc.o 00:05:29.632 CC lib/keyring/keyring.o 00:05:29.632 CC lib/trace/trace_flags.o 00:05:29.632 CC lib/trace/trace.o 00:05:29.632 CC lib/trace/trace_rpc.o 00:05:29.632 LIB libspdk_notify.a 00:05:29.632 SO libspdk_notify.so.6.0 00:05:29.889 LIB libspdk_keyring.a 00:05:29.889 LIB libspdk_trace.a 00:05:29.889 SO libspdk_trace.so.10.0 00:05:29.889 SO libspdk_keyring.so.1.0 00:05:29.889 SYMLINK libspdk_notify.so 00:05:29.889 SYMLINK libspdk_trace.so 00:05:29.889 SYMLINK libspdk_keyring.so 00:05:30.146 CC lib/thread/thread.o 00:05:30.146 CC lib/thread/iobuf.o 00:05:30.146 CC lib/sock/sock_rpc.o 00:05:30.146 CC lib/sock/sock.o 00:05:30.709 LIB libspdk_sock.a 00:05:30.709 SO libspdk_sock.so.10.0 00:05:30.709 SYMLINK libspdk_sock.so 00:05:30.965 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:30.965 CC lib/nvme/nvme_ctrlr.o 00:05:30.965 CC lib/nvme/nvme_ns_cmd.o 00:05:30.965 CC lib/nvme/nvme_ns.o 00:05:30.965 CC lib/nvme/nvme_fabric.o 00:05:30.965 CC lib/nvme/nvme_pcie_common.o 00:05:30.965 CC lib/nvme/nvme_qpair.o 00:05:30.965 CC lib/nvme/nvme.o 00:05:30.965 CC lib/nvme/nvme_pcie.o 00:05:31.896 CC lib/nvme/nvme_quirks.o 00:05:31.896 CC lib/nvme/nvme_transport.o 00:05:31.896 CC lib/nvme/nvme_discovery.o 00:05:31.896 LIB libspdk_thread.a 00:05:31.896 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:31.896 SO libspdk_thread.so.10.1 00:05:32.153 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:32.153 CC lib/nvme/nvme_tcp.o 00:05:32.153 CC lib/nvme/nvme_opal.o 00:05:32.153 SYMLINK libspdk_thread.so 00:05:32.153 CC lib/nvme/nvme_io_msg.o 00:05:32.409 CC lib/nvme/nvme_poll_group.o 00:05:32.409 CC lib/nvme/nvme_zns.o 00:05:32.409 CC lib/nvme/nvme_stubs.o 00:05:32.666 CC lib/nvme/nvme_auth.o 00:05:32.666 CC lib/nvme/nvme_cuse.o 00:05:32.924 CC lib/nvme/nvme_rdma.o 00:05:32.924 CC lib/accel/accel.o 00:05:32.924 CC lib/blob/blobstore.o 00:05:33.182 CC lib/accel/accel_rpc.o 00:05:33.182 CC lib/blob/request.o 00:05:33.440 CC lib/blob/zeroes.o 00:05:33.440 CC lib/init/json_config.o 00:05:33.440 CC lib/virtio/virtio.o 00:05:33.698 CC lib/virtio/virtio_vhost_user.o 00:05:33.698 CC lib/virtio/virtio_vfio_user.o 00:05:33.698 CC lib/blob/blob_bs_dev.o 00:05:33.698 CC lib/virtio/virtio_pci.o 00:05:33.698 CC lib/init/subsystem.o 00:05:33.956 CC lib/init/subsystem_rpc.o 00:05:33.956 CC lib/init/rpc.o 00:05:33.956 CC lib/accel/accel_sw.o 00:05:33.956 LIB libspdk_init.a 00:05:34.231 SO libspdk_init.so.5.0 00:05:34.231 LIB libspdk_virtio.a 00:05:34.231 SYMLINK libspdk_init.so 00:05:34.231 SO libspdk_virtio.so.7.0 00:05:34.231 LIB libspdk_accel.a 00:05:34.231 SYMLINK libspdk_virtio.so 00:05:34.231 SO libspdk_accel.so.16.0 00:05:34.490 SYMLINK libspdk_accel.so 00:05:34.490 LIB libspdk_nvme.a 00:05:34.490 CC lib/event/log_rpc.o 00:05:34.490 CC lib/event/app.o 00:05:34.490 CC lib/event/reactor.o 00:05:34.490 CC lib/event/app_rpc.o 00:05:34.490 CC lib/event/scheduler_static.o 00:05:34.747 SO libspdk_nvme.so.13.1 00:05:34.747 CC lib/bdev/bdev_rpc.o 00:05:34.747 CC lib/bdev/part.o 00:05:34.747 CC lib/bdev/bdev.o 00:05:34.747 CC lib/bdev/bdev_zone.o 00:05:34.747 CC lib/bdev/scsi_nvme.o 00:05:35.004 SYMLINK libspdk_nvme.so 00:05:35.004 LIB libspdk_event.a 00:05:35.004 SO libspdk_event.so.14.0 00:05:35.263 SYMLINK libspdk_event.so 00:05:36.198 LIB libspdk_blob.a 00:05:36.455 SO libspdk_blob.so.11.0 
00:05:36.455 SYMLINK libspdk_blob.so 00:05:36.712 CC lib/blobfs/tree.o 00:05:36.712 CC lib/blobfs/blobfs.o 00:05:36.712 CC lib/lvol/lvol.o 00:05:37.696 LIB libspdk_bdev.a 00:05:37.696 SO libspdk_bdev.so.16.0 00:05:37.696 LIB libspdk_blobfs.a 00:05:37.696 LIB libspdk_lvol.a 00:05:37.696 SO libspdk_blobfs.so.10.0 00:05:37.696 SYMLINK libspdk_bdev.so 00:05:37.696 SO libspdk_lvol.so.10.0 00:05:37.974 SYMLINK libspdk_blobfs.so 00:05:37.974 SYMLINK libspdk_lvol.so 00:05:37.974 CC lib/scsi/dev.o 00:05:37.974 CC lib/scsi/port.o 00:05:37.974 CC lib/scsi/lun.o 00:05:37.974 CC lib/scsi/scsi.o 00:05:37.974 CC lib/scsi/scsi_bdev.o 00:05:37.974 CC lib/scsi/scsi_pr.o 00:05:37.974 CC lib/ftl/ftl_core.o 00:05:37.974 CC lib/nvmf/ctrlr.o 00:05:37.974 CC lib/nbd/nbd.o 00:05:37.974 CC lib/ublk/ublk.o 00:05:38.232 CC lib/ublk/ublk_rpc.o 00:05:38.232 CC lib/nbd/nbd_rpc.o 00:05:38.232 CC lib/nvmf/ctrlr_discovery.o 00:05:38.232 CC lib/nvmf/ctrlr_bdev.o 00:05:38.490 CC lib/nvmf/subsystem.o 00:05:38.490 CC lib/scsi/scsi_rpc.o 00:05:38.490 CC lib/nvmf/nvmf.o 00:05:38.490 CC lib/ftl/ftl_init.o 00:05:38.490 LIB libspdk_nbd.a 00:05:38.490 SO libspdk_nbd.so.7.0 00:05:38.748 LIB libspdk_ublk.a 00:05:38.748 CC lib/nvmf/nvmf_rpc.o 00:05:38.748 SO libspdk_ublk.so.3.0 00:05:38.748 CC lib/scsi/task.o 00:05:38.748 SYMLINK libspdk_nbd.so 00:05:38.748 CC lib/nvmf/transport.o 00:05:38.748 SYMLINK libspdk_ublk.so 00:05:38.748 CC lib/ftl/ftl_layout.o 00:05:38.748 CC lib/nvmf/tcp.o 00:05:39.006 CC lib/nvmf/stubs.o 00:05:39.006 LIB libspdk_scsi.a 00:05:39.006 CC lib/ftl/ftl_debug.o 00:05:39.006 SO libspdk_scsi.so.9.0 00:05:39.006 CC lib/nvmf/mdns_server.o 00:05:39.262 SYMLINK libspdk_scsi.so 00:05:39.262 CC lib/nvmf/rdma.o 00:05:39.519 CC lib/ftl/ftl_io.o 00:05:39.519 CC lib/nvmf/auth.o 00:05:39.776 CC lib/ftl/ftl_sb.o 00:05:39.776 CC lib/ftl/ftl_l2p.o 00:05:39.776 CC lib/ftl/ftl_l2p_flat.o 00:05:40.041 CC lib/ftl/ftl_nv_cache.o 00:05:40.041 CC lib/iscsi/conn.o 00:05:40.041 CC lib/iscsi/init_grp.o 00:05:40.041 CC lib/ftl/ftl_band.o 00:05:40.041 CC lib/vhost/vhost.o 00:05:40.310 CC lib/vhost/vhost_rpc.o 00:05:40.573 CC lib/vhost/vhost_scsi.o 00:05:40.573 CC lib/vhost/vhost_blk.o 00:05:40.573 CC lib/vhost/rte_vhost_user.o 00:05:40.845 CC lib/ftl/ftl_band_ops.o 00:05:40.845 CC lib/ftl/ftl_writer.o 00:05:41.103 CC lib/iscsi/iscsi.o 00:05:41.103 CC lib/ftl/ftl_rq.o 00:05:41.103 CC lib/ftl/ftl_reloc.o 00:05:41.361 CC lib/ftl/ftl_l2p_cache.o 00:05:41.361 CC lib/ftl/ftl_p2l.o 00:05:41.361 CC lib/iscsi/md5.o 00:05:41.361 CC lib/ftl/mngt/ftl_mngt.o 00:05:41.618 CC lib/iscsi/param.o 00:05:41.618 CC lib/iscsi/portal_grp.o 00:05:41.618 LIB libspdk_nvmf.a 00:05:41.618 SO libspdk_nvmf.so.19.0 00:05:41.876 CC lib/iscsi/tgt_node.o 00:05:41.876 CC lib/iscsi/iscsi_subsystem.o 00:05:41.876 CC lib/iscsi/iscsi_rpc.o 00:05:41.876 CC lib/iscsi/task.o 00:05:42.134 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:42.134 SYMLINK libspdk_nvmf.so 00:05:42.134 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:42.134 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:42.134 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:42.134 LIB libspdk_vhost.a 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:42.392 SO libspdk_vhost.so.8.0 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:42.392 SYMLINK libspdk_vhost.so 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:42.392 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:42.650 CC 
lib/ftl/utils/ftl_conf.o 00:05:42.650 CC lib/ftl/utils/ftl_md.o 00:05:42.650 CC lib/ftl/utils/ftl_mempool.o 00:05:42.650 CC lib/ftl/utils/ftl_bitmap.o 00:05:42.907 CC lib/ftl/utils/ftl_property.o 00:05:42.907 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:42.907 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:42.907 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:42.907 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:42.907 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:42.907 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:43.165 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:43.165 LIB libspdk_iscsi.a 00:05:43.165 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:43.165 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:43.165 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:43.165 SO libspdk_iscsi.so.8.0 00:05:43.165 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:43.165 CC lib/ftl/base/ftl_base_dev.o 00:05:43.422 CC lib/ftl/base/ftl_base_bdev.o 00:05:43.422 CC lib/ftl/ftl_trace.o 00:05:43.422 SYMLINK libspdk_iscsi.so 00:05:43.679 LIB libspdk_ftl.a 00:05:43.935 SO libspdk_ftl.so.9.0 00:05:44.195 SYMLINK libspdk_ftl.so 00:05:44.764 CC module/env_dpdk/env_dpdk_rpc.o 00:05:44.764 CC module/sock/uring/uring.o 00:05:44.764 CC module/keyring/linux/keyring.o 00:05:44.764 CC module/accel/ioat/accel_ioat.o 00:05:44.764 CC module/accel/dsa/accel_dsa.o 00:05:44.764 CC module/keyring/file/keyring.o 00:05:44.764 CC module/accel/error/accel_error.o 00:05:44.764 CC module/blob/bdev/blob_bdev.o 00:05:44.764 CC module/sock/posix/posix.o 00:05:44.764 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:44.764 LIB libspdk_env_dpdk_rpc.a 00:05:44.764 SO libspdk_env_dpdk_rpc.so.6.0 00:05:44.764 CC module/keyring/file/keyring_rpc.o 00:05:44.764 SYMLINK libspdk_env_dpdk_rpc.so 00:05:44.764 CC module/keyring/linux/keyring_rpc.o 00:05:45.021 CC module/accel/error/accel_error_rpc.o 00:05:45.021 CC module/accel/ioat/accel_ioat_rpc.o 00:05:45.021 LIB libspdk_keyring_file.a 00:05:45.021 LIB libspdk_keyring_linux.a 00:05:45.021 SO libspdk_keyring_file.so.1.0 00:05:45.021 SO libspdk_keyring_linux.so.1.0 00:05:45.021 LIB libspdk_scheduler_dynamic.a 00:05:45.021 LIB libspdk_accel_ioat.a 00:05:45.021 SYMLINK libspdk_keyring_file.so 00:05:45.021 SYMLINK libspdk_keyring_linux.so 00:05:45.021 SO libspdk_scheduler_dynamic.so.4.0 00:05:45.021 SO libspdk_accel_ioat.so.6.0 00:05:45.021 CC module/accel/dsa/accel_dsa_rpc.o 00:05:45.278 LIB libspdk_accel_error.a 00:05:45.278 SYMLINK libspdk_scheduler_dynamic.so 00:05:45.278 LIB libspdk_blob_bdev.a 00:05:45.278 SO libspdk_accel_error.so.2.0 00:05:45.278 SYMLINK libspdk_accel_ioat.so 00:05:45.278 SO libspdk_blob_bdev.so.11.0 00:05:45.278 SYMLINK libspdk_accel_error.so 00:05:45.278 CC module/accel/iaa/accel_iaa.o 00:05:45.278 CC module/accel/iaa/accel_iaa_rpc.o 00:05:45.278 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:45.278 SYMLINK libspdk_blob_bdev.so 00:05:45.278 LIB libspdk_accel_dsa.a 00:05:45.278 CC module/scheduler/gscheduler/gscheduler.o 00:05:45.278 SO libspdk_accel_dsa.so.5.0 00:05:45.535 LIB libspdk_sock_uring.a 00:05:45.535 SO libspdk_sock_uring.so.5.0 00:05:45.535 SYMLINK libspdk_accel_dsa.so 00:05:45.535 LIB libspdk_sock_posix.a 00:05:45.535 LIB libspdk_accel_iaa.a 00:05:45.535 LIB libspdk_scheduler_dpdk_governor.a 00:05:45.535 SYMLINK libspdk_sock_uring.so 00:05:45.535 SO libspdk_accel_iaa.so.3.0 00:05:45.535 SO libspdk_sock_posix.so.6.0 00:05:45.535 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:45.535 LIB libspdk_scheduler_gscheduler.a 00:05:45.535 CC module/bdev/delay/vbdev_delay.o 00:05:45.535 CC 
module/bdev/error/vbdev_error.o 00:05:45.792 SO libspdk_scheduler_gscheduler.so.4.0 00:05:45.792 CC module/bdev/gpt/gpt.o 00:05:45.792 CC module/blobfs/bdev/blobfs_bdev.o 00:05:45.792 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:45.792 SYMLINK libspdk_accel_iaa.so 00:05:45.792 SYMLINK libspdk_sock_posix.so 00:05:45.792 CC module/bdev/gpt/vbdev_gpt.o 00:05:45.792 SYMLINK libspdk_scheduler_gscheduler.so 00:05:45.792 CC module/bdev/lvol/vbdev_lvol.o 00:05:45.792 CC module/bdev/malloc/bdev_malloc.o 00:05:46.049 CC module/bdev/nvme/bdev_nvme.o 00:05:46.049 CC module/bdev/null/bdev_null.o 00:05:46.049 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:46.049 CC module/bdev/error/vbdev_error_rpc.o 00:05:46.049 CC module/bdev/null/bdev_null_rpc.o 00:05:46.049 CC module/bdev/passthru/vbdev_passthru.o 00:05:46.049 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:46.049 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:46.049 LIB libspdk_bdev_gpt.a 00:05:46.306 LIB libspdk_bdev_delay.a 00:05:46.306 SO libspdk_bdev_gpt.so.6.0 00:05:46.306 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:46.306 LIB libspdk_blobfs_bdev.a 00:05:46.306 SO libspdk_bdev_delay.so.6.0 00:05:46.306 SO libspdk_blobfs_bdev.so.6.0 00:05:46.306 LIB libspdk_bdev_null.a 00:05:46.306 LIB libspdk_bdev_error.a 00:05:46.306 SYMLINK libspdk_bdev_gpt.so 00:05:46.306 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:46.306 SYMLINK libspdk_bdev_delay.so 00:05:46.306 SO libspdk_bdev_null.so.6.0 00:05:46.306 SO libspdk_bdev_error.so.6.0 00:05:46.306 SYMLINK libspdk_blobfs_bdev.so 00:05:46.306 LIB libspdk_bdev_passthru.a 00:05:46.306 LIB libspdk_bdev_malloc.a 00:05:46.306 SYMLINK libspdk_bdev_null.so 00:05:46.306 SO libspdk_bdev_passthru.so.6.0 00:05:46.306 SYMLINK libspdk_bdev_error.so 00:05:46.565 SO libspdk_bdev_malloc.so.6.0 00:05:46.565 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:46.565 SYMLINK libspdk_bdev_passthru.so 00:05:46.565 CC module/bdev/split/vbdev_split.o 00:05:46.565 CC module/bdev/raid/bdev_raid.o 00:05:46.565 SYMLINK libspdk_bdev_malloc.so 00:05:46.565 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:46.565 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:46.565 CC module/bdev/uring/bdev_uring.o 00:05:46.565 LIB libspdk_bdev_lvol.a 00:05:46.824 CC module/bdev/ftl/bdev_ftl.o 00:05:46.824 CC module/bdev/aio/bdev_aio.o 00:05:46.824 SO libspdk_bdev_lvol.so.6.0 00:05:46.824 CC module/bdev/nvme/nvme_rpc.o 00:05:46.824 SYMLINK libspdk_bdev_lvol.so 00:05:46.824 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:46.824 LIB libspdk_bdev_zone_block.a 00:05:46.824 CC module/bdev/split/vbdev_split_rpc.o 00:05:46.824 SO libspdk_bdev_zone_block.so.6.0 00:05:47.098 CC module/bdev/uring/bdev_uring_rpc.o 00:05:47.098 SYMLINK libspdk_bdev_zone_block.so 00:05:47.098 CC module/bdev/raid/bdev_raid_rpc.o 00:05:47.098 CC module/bdev/aio/bdev_aio_rpc.o 00:05:47.098 LIB libspdk_bdev_ftl.a 00:05:47.098 LIB libspdk_bdev_split.a 00:05:47.098 SO libspdk_bdev_ftl.so.6.0 00:05:47.098 SO libspdk_bdev_split.so.6.0 00:05:47.098 LIB libspdk_bdev_uring.a 00:05:47.098 CC module/bdev/nvme/bdev_mdns_client.o 00:05:47.098 CC module/bdev/nvme/vbdev_opal.o 00:05:47.098 SO libspdk_bdev_uring.so.6.0 00:05:47.098 SYMLINK libspdk_bdev_ftl.so 00:05:47.098 SYMLINK libspdk_bdev_split.so 00:05:47.098 CC module/bdev/raid/bdev_raid_sb.o 00:05:47.098 CC module/bdev/raid/raid0.o 00:05:47.098 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:47.098 CC module/bdev/raid/raid1.o 00:05:47.356 LIB libspdk_bdev_aio.a 00:05:47.356 SYMLINK libspdk_bdev_uring.so 00:05:47.356 SO libspdk_bdev_aio.so.6.0 00:05:47.356 
SYMLINK libspdk_bdev_aio.so 00:05:47.356 CC module/bdev/raid/concat.o 00:05:47.356 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:47.356 CC module/bdev/iscsi/bdev_iscsi.o 00:05:47.356 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:47.615 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:47.615 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:47.615 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:47.874 LIB libspdk_bdev_raid.a 00:05:47.874 LIB libspdk_bdev_iscsi.a 00:05:48.133 SO libspdk_bdev_iscsi.so.6.0 00:05:48.133 SO libspdk_bdev_raid.so.6.0 00:05:48.133 LIB libspdk_bdev_virtio.a 00:05:48.133 SYMLINK libspdk_bdev_iscsi.so 00:05:48.133 SO libspdk_bdev_virtio.so.6.0 00:05:48.133 SYMLINK libspdk_bdev_raid.so 00:05:48.133 SYMLINK libspdk_bdev_virtio.so 00:05:48.391 LIB libspdk_bdev_nvme.a 00:05:48.391 SO libspdk_bdev_nvme.so.7.0 00:05:48.649 SYMLINK libspdk_bdev_nvme.so 00:05:49.217 CC module/event/subsystems/scheduler/scheduler.o 00:05:49.217 CC module/event/subsystems/sock/sock.o 00:05:49.217 CC module/event/subsystems/iobuf/iobuf.o 00:05:49.217 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:49.217 CC module/event/subsystems/keyring/keyring.o 00:05:49.217 CC module/event/subsystems/vmd/vmd.o 00:05:49.217 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:49.217 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:49.217 LIB libspdk_event_scheduler.a 00:05:49.218 LIB libspdk_event_keyring.a 00:05:49.218 SO libspdk_event_scheduler.so.4.0 00:05:49.218 LIB libspdk_event_sock.a 00:05:49.218 LIB libspdk_event_vhost_blk.a 00:05:49.218 LIB libspdk_event_vmd.a 00:05:49.218 LIB libspdk_event_iobuf.a 00:05:49.218 SO libspdk_event_keyring.so.1.0 00:05:49.218 SO libspdk_event_vhost_blk.so.3.0 00:05:49.218 SO libspdk_event_sock.so.5.0 00:05:49.218 SO libspdk_event_iobuf.so.3.0 00:05:49.218 SO libspdk_event_vmd.so.6.0 00:05:49.476 SYMLINK libspdk_event_scheduler.so 00:05:49.476 SYMLINK libspdk_event_keyring.so 00:05:49.476 SYMLINK libspdk_event_vhost_blk.so 00:05:49.476 SYMLINK libspdk_event_sock.so 00:05:49.476 SYMLINK libspdk_event_iobuf.so 00:05:49.476 SYMLINK libspdk_event_vmd.so 00:05:49.734 CC module/event/subsystems/accel/accel.o 00:05:49.734 LIB libspdk_event_accel.a 00:05:49.995 SO libspdk_event_accel.so.6.0 00:05:49.995 SYMLINK libspdk_event_accel.so 00:05:50.315 CC module/event/subsystems/bdev/bdev.o 00:05:50.315 LIB libspdk_event_bdev.a 00:05:50.315 SO libspdk_event_bdev.so.6.0 00:05:50.575 SYMLINK libspdk_event_bdev.so 00:05:50.575 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:50.575 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:50.575 CC module/event/subsystems/nbd/nbd.o 00:05:50.575 CC module/event/subsystems/ublk/ublk.o 00:05:50.575 CC module/event/subsystems/scsi/scsi.o 00:05:50.833 LIB libspdk_event_nbd.a 00:05:50.833 LIB libspdk_event_ublk.a 00:05:50.833 SO libspdk_event_nbd.so.6.0 00:05:50.833 SO libspdk_event_ublk.so.3.0 00:05:50.833 LIB libspdk_event_scsi.a 00:05:50.833 LIB libspdk_event_nvmf.a 00:05:50.833 SYMLINK libspdk_event_nbd.so 00:05:50.833 SO libspdk_event_scsi.so.6.0 00:05:50.833 SO libspdk_event_nvmf.so.6.0 00:05:50.833 SYMLINK libspdk_event_ublk.so 00:05:51.092 SYMLINK libspdk_event_nvmf.so 00:05:51.092 SYMLINK libspdk_event_scsi.so 00:05:51.351 CC module/event/subsystems/iscsi/iscsi.o 00:05:51.351 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:51.351 LIB libspdk_event_vhost_scsi.a 00:05:51.351 LIB libspdk_event_iscsi.a 00:05:51.351 SO libspdk_event_vhost_scsi.so.3.0 00:05:51.351 SO libspdk_event_iscsi.so.6.0 00:05:51.351 SYMLINK libspdk_event_vhost_scsi.so 
00:05:51.611 SYMLINK libspdk_event_iscsi.so 00:05:51.611 SO libspdk.so.6.0 00:05:51.611 SYMLINK libspdk.so 00:05:51.871 CXX app/trace/trace.o 00:05:51.871 CC test/rpc_client/rpc_client_test.o 00:05:51.871 TEST_HEADER include/spdk/accel.h 00:05:51.871 CC app/trace_record/trace_record.o 00:05:51.871 TEST_HEADER include/spdk/accel_module.h 00:05:51.871 TEST_HEADER include/spdk/assert.h 00:05:51.871 TEST_HEADER include/spdk/barrier.h 00:05:51.871 TEST_HEADER include/spdk/base64.h 00:05:51.871 TEST_HEADER include/spdk/bdev.h 00:05:51.871 TEST_HEADER include/spdk/bdev_module.h 00:05:51.871 TEST_HEADER include/spdk/bdev_zone.h 00:05:51.871 TEST_HEADER include/spdk/bit_array.h 00:05:51.871 TEST_HEADER include/spdk/bit_pool.h 00:05:51.871 TEST_HEADER include/spdk/blob_bdev.h 00:05:51.871 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:51.871 TEST_HEADER include/spdk/blobfs.h 00:05:51.871 TEST_HEADER include/spdk/blob.h 00:05:51.871 TEST_HEADER include/spdk/conf.h 00:05:51.871 TEST_HEADER include/spdk/config.h 00:05:51.871 TEST_HEADER include/spdk/cpuset.h 00:05:51.871 CC app/nvmf_tgt/nvmf_main.o 00:05:51.871 TEST_HEADER include/spdk/crc16.h 00:05:51.871 TEST_HEADER include/spdk/crc32.h 00:05:51.871 TEST_HEADER include/spdk/crc64.h 00:05:51.871 TEST_HEADER include/spdk/dif.h 00:05:51.871 TEST_HEADER include/spdk/dma.h 00:05:51.871 TEST_HEADER include/spdk/endian.h 00:05:51.871 TEST_HEADER include/spdk/env_dpdk.h 00:05:51.871 TEST_HEADER include/spdk/env.h 00:05:51.871 TEST_HEADER include/spdk/event.h 00:05:51.871 TEST_HEADER include/spdk/fd_group.h 00:05:51.871 TEST_HEADER include/spdk/fd.h 00:05:51.871 TEST_HEADER include/spdk/file.h 00:05:51.871 TEST_HEADER include/spdk/ftl.h 00:05:51.871 TEST_HEADER include/spdk/gpt_spec.h 00:05:52.131 TEST_HEADER include/spdk/hexlify.h 00:05:52.131 TEST_HEADER include/spdk/histogram_data.h 00:05:52.131 CC test/thread/poller_perf/poller_perf.o 00:05:52.131 TEST_HEADER include/spdk/idxd.h 00:05:52.131 TEST_HEADER include/spdk/idxd_spec.h 00:05:52.131 TEST_HEADER include/spdk/init.h 00:05:52.131 TEST_HEADER include/spdk/ioat.h 00:05:52.131 TEST_HEADER include/spdk/ioat_spec.h 00:05:52.131 TEST_HEADER include/spdk/iscsi_spec.h 00:05:52.131 TEST_HEADER include/spdk/json.h 00:05:52.132 TEST_HEADER include/spdk/jsonrpc.h 00:05:52.132 TEST_HEADER include/spdk/keyring.h 00:05:52.132 TEST_HEADER include/spdk/keyring_module.h 00:05:52.132 TEST_HEADER include/spdk/likely.h 00:05:52.132 TEST_HEADER include/spdk/log.h 00:05:52.132 TEST_HEADER include/spdk/lvol.h 00:05:52.132 TEST_HEADER include/spdk/memory.h 00:05:52.132 TEST_HEADER include/spdk/mmio.h 00:05:52.132 TEST_HEADER include/spdk/nbd.h 00:05:52.132 TEST_HEADER include/spdk/net.h 00:05:52.132 TEST_HEADER include/spdk/notify.h 00:05:52.132 CC examples/util/zipf/zipf.o 00:05:52.132 TEST_HEADER include/spdk/nvme.h 00:05:52.132 TEST_HEADER include/spdk/nvme_intel.h 00:05:52.132 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:52.132 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:52.132 CC test/dma/test_dma/test_dma.o 00:05:52.132 TEST_HEADER include/spdk/nvme_spec.h 00:05:52.132 TEST_HEADER include/spdk/nvme_zns.h 00:05:52.132 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:52.132 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:52.132 TEST_HEADER include/spdk/nvmf.h 00:05:52.132 TEST_HEADER include/spdk/nvmf_spec.h 00:05:52.132 TEST_HEADER include/spdk/nvmf_transport.h 00:05:52.132 TEST_HEADER include/spdk/opal.h 00:05:52.132 TEST_HEADER include/spdk/opal_spec.h 00:05:52.132 CC test/app/bdev_svc/bdev_svc.o 00:05:52.132 TEST_HEADER 
include/spdk/pci_ids.h 00:05:52.132 TEST_HEADER include/spdk/pipe.h 00:05:52.132 TEST_HEADER include/spdk/queue.h 00:05:52.132 TEST_HEADER include/spdk/reduce.h 00:05:52.132 TEST_HEADER include/spdk/rpc.h 00:05:52.132 TEST_HEADER include/spdk/scheduler.h 00:05:52.132 TEST_HEADER include/spdk/scsi.h 00:05:52.132 CC test/env/mem_callbacks/mem_callbacks.o 00:05:52.132 TEST_HEADER include/spdk/scsi_spec.h 00:05:52.132 TEST_HEADER include/spdk/sock.h 00:05:52.132 TEST_HEADER include/spdk/stdinc.h 00:05:52.132 TEST_HEADER include/spdk/string.h 00:05:52.132 TEST_HEADER include/spdk/thread.h 00:05:52.132 TEST_HEADER include/spdk/trace.h 00:05:52.132 LINK rpc_client_test 00:05:52.132 TEST_HEADER include/spdk/trace_parser.h 00:05:52.132 TEST_HEADER include/spdk/tree.h 00:05:52.132 TEST_HEADER include/spdk/ublk.h 00:05:52.132 TEST_HEADER include/spdk/util.h 00:05:52.132 TEST_HEADER include/spdk/uuid.h 00:05:52.132 TEST_HEADER include/spdk/version.h 00:05:52.132 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:52.132 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:52.132 TEST_HEADER include/spdk/vhost.h 00:05:52.132 TEST_HEADER include/spdk/vmd.h 00:05:52.132 TEST_HEADER include/spdk/xor.h 00:05:52.132 TEST_HEADER include/spdk/zipf.h 00:05:52.132 CXX test/cpp_headers/accel.o 00:05:52.132 LINK poller_perf 00:05:52.132 LINK spdk_trace_record 00:05:52.132 LINK nvmf_tgt 00:05:52.391 LINK zipf 00:05:52.391 CXX test/cpp_headers/accel_module.o 00:05:52.391 LINK bdev_svc 00:05:52.391 LINK spdk_trace 00:05:52.391 CXX test/cpp_headers/assert.o 00:05:52.391 CXX test/cpp_headers/barrier.o 00:05:52.391 LINK test_dma 00:05:52.649 CC examples/ioat/perf/perf.o 00:05:52.649 CXX test/cpp_headers/base64.o 00:05:52.649 CC examples/ioat/verify/verify.o 00:05:52.649 CC test/event/event_perf/event_perf.o 00:05:52.649 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:52.649 CC test/env/vtophys/vtophys.o 00:05:52.649 CC app/iscsi_tgt/iscsi_tgt.o 00:05:52.649 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:52.649 CXX test/cpp_headers/bdev.o 00:05:52.907 LINK vtophys 00:05:52.907 LINK mem_callbacks 00:05:52.907 LINK event_perf 00:05:52.907 CC test/env/memory/memory_ut.o 00:05:52.907 LINK ioat_perf 00:05:52.907 LINK verify 00:05:52.907 LINK env_dpdk_post_init 00:05:52.907 CXX test/cpp_headers/bdev_module.o 00:05:52.907 LINK iscsi_tgt 00:05:53.265 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:53.265 CC test/event/reactor/reactor.o 00:05:53.265 CC test/event/reactor_perf/reactor_perf.o 00:05:53.265 CXX test/cpp_headers/bdev_zone.o 00:05:53.265 LINK nvme_fuzz 00:05:53.265 CC examples/sock/hello_world/hello_sock.o 00:05:53.265 CC examples/vmd/lsvmd/lsvmd.o 00:05:53.265 LINK reactor 00:05:53.265 CC examples/thread/thread/thread_ex.o 00:05:53.265 LINK reactor_perf 00:05:53.265 LINK interrupt_tgt 00:05:53.266 CXX test/cpp_headers/bit_array.o 00:05:53.554 LINK lsvmd 00:05:53.554 LINK hello_sock 00:05:53.554 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:53.554 CXX test/cpp_headers/bit_pool.o 00:05:53.554 CC app/spdk_tgt/spdk_tgt.o 00:05:53.554 LINK thread 00:05:53.554 CC app/spdk_lspci/spdk_lspci.o 00:05:53.554 CC test/event/app_repeat/app_repeat.o 00:05:53.554 CXX test/cpp_headers/blob_bdev.o 00:05:53.813 CC examples/vmd/led/led.o 00:05:53.813 CC test/event/scheduler/scheduler.o 00:05:53.813 LINK app_repeat 00:05:53.813 LINK spdk_tgt 00:05:53.813 CXX test/cpp_headers/blobfs_bdev.o 00:05:53.813 LINK spdk_lspci 00:05:53.813 LINK memory_ut 00:05:53.813 CC test/accel/dif/dif.o 00:05:54.073 LINK led 00:05:54.073 CXX 
test/cpp_headers/blobfs.o 00:05:54.073 LINK scheduler 00:05:54.073 CC app/spdk_nvme_perf/perf.o 00:05:54.073 CC test/env/pci/pci_ut.o 00:05:54.073 CXX test/cpp_headers/blob.o 00:05:54.073 CC test/blobfs/mkfs/mkfs.o 00:05:54.331 CC test/nvme/aer/aer.o 00:05:54.331 CXX test/cpp_headers/conf.o 00:05:54.331 CC test/lvol/esnap/esnap.o 00:05:54.331 CC examples/idxd/perf/perf.o 00:05:54.331 LINK mkfs 00:05:54.589 CXX test/cpp_headers/config.o 00:05:54.589 CXX test/cpp_headers/cpuset.o 00:05:54.590 LINK aer 00:05:54.590 LINK pci_ut 00:05:54.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:54.590 LINK dif 00:05:54.590 CXX test/cpp_headers/crc16.o 00:05:54.590 CC test/app/histogram_perf/histogram_perf.o 00:05:54.849 LINK idxd_perf 00:05:54.849 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:54.849 LINK histogram_perf 00:05:54.849 CXX test/cpp_headers/crc32.o 00:05:54.849 CC test/nvme/reset/reset.o 00:05:54.849 CC test/nvme/sgl/sgl.o 00:05:55.108 CC test/nvme/e2edp/nvme_dp.o 00:05:55.108 LINK spdk_nvme_perf 00:05:55.108 CXX test/cpp_headers/crc64.o 00:05:55.108 CXX test/cpp_headers/dif.o 00:05:55.108 CC examples/accel/perf/accel_perf.o 00:05:55.108 LINK iscsi_fuzz 00:05:55.108 LINK reset 00:05:55.108 LINK sgl 00:05:55.108 CXX test/cpp_headers/dma.o 00:05:55.367 CC app/spdk_nvme_identify/identify.o 00:05:55.367 LINK nvme_dp 00:05:55.367 CXX test/cpp_headers/endian.o 00:05:55.367 LINK vhost_fuzz 00:05:55.367 CXX test/cpp_headers/env_dpdk.o 00:05:55.627 CC test/app/jsoncat/jsoncat.o 00:05:55.627 CC test/bdev/bdevio/bdevio.o 00:05:55.627 LINK accel_perf 00:05:55.627 CC test/app/stub/stub.o 00:05:55.627 CXX test/cpp_headers/env.o 00:05:55.627 LINK jsoncat 00:05:55.627 CC test/nvme/overhead/overhead.o 00:05:55.627 LINK stub 00:05:55.886 CC examples/blob/hello_world/hello_blob.o 00:05:55.886 CC examples/blob/cli/blobcli.o 00:05:55.886 LINK bdevio 00:05:55.886 CXX test/cpp_headers/event.o 00:05:55.886 LINK overhead 00:05:55.886 CC examples/nvme/hello_world/hello_world.o 00:05:56.145 LINK spdk_nvme_identify 00:05:56.145 CC examples/nvme/reconnect/reconnect.o 00:05:56.145 LINK hello_blob 00:05:56.145 CXX test/cpp_headers/fd_group.o 00:05:56.145 CC examples/bdev/hello_world/hello_bdev.o 00:05:56.145 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:56.403 LINK hello_world 00:05:56.403 CC test/nvme/err_injection/err_injection.o 00:05:56.403 CXX test/cpp_headers/fd.o 00:05:56.403 LINK blobcli 00:05:56.403 CC app/spdk_nvme_discover/discovery_aer.o 00:05:56.403 CC app/spdk_top/spdk_top.o 00:05:56.403 LINK hello_bdev 00:05:56.662 LINK err_injection 00:05:56.662 CXX test/cpp_headers/file.o 00:05:56.662 LINK reconnect 00:05:56.662 LINK nvme_manage 00:05:56.662 LINK spdk_nvme_discover 00:05:56.662 CXX test/cpp_headers/ftl.o 00:05:56.922 CC app/vhost/vhost.o 00:05:56.922 CXX test/cpp_headers/gpt_spec.o 00:05:56.922 CC test/nvme/reserve/reserve.o 00:05:56.922 CC test/nvme/startup/startup.o 00:05:56.922 CC examples/nvme/arbitration/arbitration.o 00:05:56.922 CC examples/bdev/bdevperf/bdevperf.o 00:05:56.922 CXX test/cpp_headers/hexlify.o 00:05:57.181 CXX test/cpp_headers/histogram_data.o 00:05:57.181 LINK vhost 00:05:57.181 LINK reserve 00:05:57.181 LINK startup 00:05:57.181 CC app/spdk_dd/spdk_dd.o 00:05:57.181 LINK arbitration 00:05:57.182 CXX test/cpp_headers/idxd.o 00:05:57.441 CC test/nvme/simple_copy/simple_copy.o 00:05:57.441 CC test/nvme/connect_stress/connect_stress.o 00:05:57.441 CC test/nvme/boot_partition/boot_partition.o 00:05:57.441 CC app/fio/nvme/fio_plugin.o 00:05:57.441 CXX 
test/cpp_headers/idxd_spec.o 00:05:57.441 CC examples/nvme/hotplug/hotplug.o 00:05:57.441 LINK spdk_top 00:05:57.702 LINK simple_copy 00:05:57.702 LINK connect_stress 00:05:57.702 LINK spdk_dd 00:05:57.702 LINK boot_partition 00:05:57.702 CXX test/cpp_headers/init.o 00:05:57.702 LINK hotplug 00:05:57.702 CXX test/cpp_headers/ioat.o 00:05:57.964 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:57.964 CXX test/cpp_headers/ioat_spec.o 00:05:57.964 CC app/fio/bdev/fio_plugin.o 00:05:57.964 CC test/nvme/compliance/nvme_compliance.o 00:05:57.964 CC test/nvme/fused_ordering/fused_ordering.o 00:05:57.964 LINK bdevperf 00:05:57.964 LINK spdk_nvme 00:05:57.964 CXX test/cpp_headers/iscsi_spec.o 00:05:57.964 LINK cmb_copy 00:05:57.964 CC examples/nvme/abort/abort.o 00:05:58.223 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:58.223 LINK fused_ordering 00:05:58.223 CC test/nvme/fdp/fdp.o 00:05:58.223 CXX test/cpp_headers/json.o 00:05:58.223 LINK nvme_compliance 00:05:58.223 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:58.223 CC test/nvme/cuse/cuse.o 00:05:58.223 LINK doorbell_aers 00:05:58.482 CXX test/cpp_headers/jsonrpc.o 00:05:58.482 LINK spdk_bdev 00:05:58.482 CXX test/cpp_headers/keyring.o 00:05:58.482 CXX test/cpp_headers/keyring_module.o 00:05:58.482 LINK abort 00:05:58.482 CXX test/cpp_headers/likely.o 00:05:58.482 LINK pmr_persistence 00:05:58.482 CXX test/cpp_headers/log.o 00:05:58.482 LINK fdp 00:05:58.482 CXX test/cpp_headers/lvol.o 00:05:58.482 CXX test/cpp_headers/memory.o 00:05:58.482 CXX test/cpp_headers/mmio.o 00:05:58.482 CXX test/cpp_headers/nbd.o 00:05:58.741 CXX test/cpp_headers/net.o 00:05:58.741 CXX test/cpp_headers/notify.o 00:05:58.741 CXX test/cpp_headers/nvme.o 00:05:58.741 CXX test/cpp_headers/nvme_intel.o 00:05:58.741 CXX test/cpp_headers/nvme_ocssd.o 00:05:58.741 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:58.741 CXX test/cpp_headers/nvme_spec.o 00:05:58.741 CXX test/cpp_headers/nvme_zns.o 00:05:58.741 CXX test/cpp_headers/nvmf_cmd.o 00:05:58.999 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:58.999 CXX test/cpp_headers/nvmf.o 00:05:58.999 CXX test/cpp_headers/nvmf_spec.o 00:05:58.999 CC examples/nvmf/nvmf/nvmf.o 00:05:58.999 CXX test/cpp_headers/nvmf_transport.o 00:05:58.999 CXX test/cpp_headers/opal.o 00:05:58.999 CXX test/cpp_headers/opal_spec.o 00:05:58.999 CXX test/cpp_headers/pci_ids.o 00:05:58.999 CXX test/cpp_headers/pipe.o 00:05:58.999 CXX test/cpp_headers/queue.o 00:05:59.257 CXX test/cpp_headers/reduce.o 00:05:59.257 CXX test/cpp_headers/rpc.o 00:05:59.257 CXX test/cpp_headers/scheduler.o 00:05:59.257 CXX test/cpp_headers/scsi.o 00:05:59.257 LINK nvmf 00:05:59.257 CXX test/cpp_headers/scsi_spec.o 00:05:59.257 CXX test/cpp_headers/sock.o 00:05:59.257 CXX test/cpp_headers/stdinc.o 00:05:59.257 CXX test/cpp_headers/string.o 00:05:59.516 CXX test/cpp_headers/thread.o 00:05:59.516 CXX test/cpp_headers/trace.o 00:05:59.516 CXX test/cpp_headers/trace_parser.o 00:05:59.516 CXX test/cpp_headers/tree.o 00:05:59.516 CXX test/cpp_headers/ublk.o 00:05:59.516 CXX test/cpp_headers/util.o 00:05:59.517 CXX test/cpp_headers/uuid.o 00:05:59.517 CXX test/cpp_headers/version.o 00:05:59.517 LINK esnap 00:05:59.517 CXX test/cpp_headers/vfio_user_pci.o 00:05:59.517 CXX test/cpp_headers/vfio_user_spec.o 00:05:59.517 CXX test/cpp_headers/vhost.o 00:05:59.517 CXX test/cpp_headers/vmd.o 00:05:59.517 CXX test/cpp_headers/xor.o 00:05:59.517 CXX test/cpp_headers/zipf.o 00:05:59.517 LINK cuse 00:06:00.083 00:06:00.083 real 1m3.362s 00:06:00.083 user 6m31.894s 00:06:00.083 sys 1m33.249s 
00:06:00.083 13:54:09 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:00.083 13:54:09 make -- common/autotest_common.sh@10 -- $ set +x 00:06:00.083 ************************************ 00:06:00.083 END TEST make 00:06:00.083 ************************************ 00:06:00.083 13:54:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:00.083 13:54:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:00.083 13:54:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:00.083 13:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.083 13:54:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:00.083 13:54:09 -- pm/common@44 -- $ pid=5359 00:06:00.083 13:54:09 -- pm/common@50 -- $ kill -TERM 5359 00:06:00.083 13:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.083 13:54:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:00.083 13:54:09 -- pm/common@44 -- $ pid=5361 00:06:00.083 13:54:09 -- pm/common@50 -- $ kill -TERM 5361 00:06:00.083 13:54:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.083 13:54:09 -- nvmf/common.sh@7 -- # uname -s 00:06:00.083 13:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.083 13:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.083 13:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.083 13:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.083 13:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.083 13:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.083 13:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.083 13:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.083 13:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.083 13:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.083 13:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:06:00.083 13:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:06:00.083 13:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.083 13:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.083 13:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:00.083 13:54:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.083 13:54:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.083 13:54:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.083 13:54:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.083 13:54:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.083 13:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.083 13:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.083 
13:54:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.083 13:54:09 -- paths/export.sh@5 -- # export PATH 00:06:00.083 13:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.083 13:54:09 -- nvmf/common.sh@47 -- # : 0 00:06:00.083 13:54:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.083 13:54:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.083 13:54:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.083 13:54:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.083 13:54:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.083 13:54:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.083 13:54:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.083 13:54:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.083 13:54:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:00.083 13:54:09 -- spdk/autotest.sh@32 -- # uname -s 00:06:00.083 13:54:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:00.083 13:54:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:00.083 13:54:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:00.083 13:54:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:00.083 13:54:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:00.083 13:54:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:00.343 13:54:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:00.343 13:54:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:00.343 13:54:09 -- spdk/autotest.sh@48 -- # udevadm_pid=53026 00:06:00.343 13:54:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:00.343 13:54:09 -- pm/common@17 -- # local monitor 00:06:00.343 13:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.343 13:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.343 13:54:09 -- pm/common@25 -- # sleep 1 00:06:00.343 13:54:09 -- pm/common@21 -- # date +%s 00:06:00.343 13:54:09 -- pm/common@21 -- # date +%s 00:06:00.343 13:54:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:00.343 13:54:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915649 00:06:00.343 13:54:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915649 00:06:00.343 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915649_collect-cpu-load.pm.log 00:06:00.343 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915649_collect-vmstat.pm.log 00:06:01.279 13:54:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:01.279 13:54:10 -- 
spdk/autotest.sh@57 -- # timing_enter autotest 00:06:01.279 13:54:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.279 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.279 13:54:10 -- spdk/autotest.sh@59 -- # create_test_list 00:06:01.279 13:54:10 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:01.279 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.279 13:54:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:01.279 13:54:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:01.279 13:54:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:01.279 13:54:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:01.280 13:54:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:01.280 13:54:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:01.280 13:54:10 -- common/autotest_common.sh@1455 -- # uname 00:06:01.280 13:54:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:01.280 13:54:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:01.280 13:54:10 -- common/autotest_common.sh@1475 -- # uname 00:06:01.280 13:54:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:01.280 13:54:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:01.280 13:54:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:01.280 13:54:10 -- spdk/autotest.sh@72 -- # hash lcov 00:06:01.280 13:54:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:01.280 13:54:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:01.280 --rc lcov_branch_coverage=1 00:06:01.280 --rc lcov_function_coverage=1 00:06:01.280 --rc genhtml_branch_coverage=1 00:06:01.280 --rc genhtml_function_coverage=1 00:06:01.280 --rc genhtml_legend=1 00:06:01.280 --rc geninfo_all_blocks=1 00:06:01.280 ' 00:06:01.280 13:54:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:01.280 --rc lcov_branch_coverage=1 00:06:01.280 --rc lcov_function_coverage=1 00:06:01.280 --rc genhtml_branch_coverage=1 00:06:01.280 --rc genhtml_function_coverage=1 00:06:01.280 --rc genhtml_legend=1 00:06:01.280 --rc geninfo_all_blocks=1 00:06:01.280 ' 00:06:01.280 13:54:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:01.280 --rc lcov_branch_coverage=1 00:06:01.280 --rc lcov_function_coverage=1 00:06:01.280 --rc genhtml_branch_coverage=1 00:06:01.280 --rc genhtml_function_coverage=1 00:06:01.280 --rc genhtml_legend=1 00:06:01.280 --rc geninfo_all_blocks=1 00:06:01.280 --no-external' 00:06:01.280 13:54:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:01.280 --rc lcov_branch_coverage=1 00:06:01.280 --rc lcov_function_coverage=1 00:06:01.280 --rc genhtml_branch_coverage=1 00:06:01.280 --rc genhtml_function_coverage=1 00:06:01.280 --rc genhtml_legend=1 00:06:01.280 --rc geninfo_all_blocks=1 00:06:01.280 --no-external' 00:06:01.280 13:54:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:01.280 lcov: LCOV version 1.14 00:06:01.280 13:54:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 
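The lcov invocation above only takes the zero-coverage baseline; it becomes useful once it is merged with a capture taken after the tests have run. A short sketch of that usual sequence, with the --rc coverage options omitted and illustrative names for the later output files:

  SRC=/home/vagrant/spdk_repo/spdk                           # source tree, as in the command above
  lcov -q -c -i -t Baseline -d "$SRC" -o cov_base.info       # baseline: every counter recorded as zero
  # ... the autotest suites run here and update the .gcda counters ...
  lcov -q -c -t Autotest -d "$SRC" -o cov_test.info          # capture the counters after the run
  lcov -a cov_base.info -a cov_test.info -o cov_total.info   # merge, so files never exercised still show 0%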
00:06:19.375 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:19.375 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:29.388 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:29.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:29.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:29.648 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:29.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:29.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:29.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:29.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:29.909 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:29.909 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:29.909 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:29.909 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:29.909 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:29.909 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:29.909 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:30.167 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:30.167 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:30.167 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:30.167 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:30.167 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:30.167 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 
00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:30.168 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:30.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:33.529 13:54:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:33.529 13:54:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.529 13:54:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.529 13:54:42 -- spdk/autotest.sh@91 -- # rm -f 00:06:33.529 13:54:42 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:34.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.464 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:34.464 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:34.464 13:54:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:34.464 13:54:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:34.464 13:54:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:34.464 13:54:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:34.464 13:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:34.464 13:54:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:34.464 13:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:34.464 13:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:34.464 13:54:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:34.464 13:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:34.464 13:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:34.464 13:54:43 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:34.464 13:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:34.464 13:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:34.464 13:54:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:34.464 13:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:34.464 13:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:34.464 13:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:34.464 13:54:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:34.464 13:54:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.464 13:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:34.464 13:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:34.464 13:54:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:34.464 13:54:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:34.464 No valid GPT data, bailing 00:06:34.464 13:54:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:34.722 13:54:43 -- scripts/common.sh@391 -- # pt= 00:06:34.722 13:54:43 -- scripts/common.sh@392 -- # return 1 00:06:34.722 13:54:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:34.722 1+0 records in 00:06:34.722 1+0 records out 00:06:34.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521217 s, 201 MB/s 00:06:34.722 13:54:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.722 13:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:34.722 13:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n2 00:06:34.722 13:54:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:06:34.722 13:54:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:06:34.722 No valid GPT data, bailing 00:06:34.722 13:54:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:34.722 13:54:43 -- scripts/common.sh@391 -- # pt= 00:06:34.722 13:54:43 -- scripts/common.sh@392 -- # return 1 00:06:34.722 13:54:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:06:34.722 1+0 records in 00:06:34.722 1+0 records out 00:06:34.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516138 s, 203 MB/s 00:06:34.722 13:54:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.722 13:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:34.722 13:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n3 00:06:34.722 13:54:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:06:34.722 13:54:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:06:34.722 No valid GPT data, bailing 00:06:34.722 13:54:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:34.722 13:54:43 -- scripts/common.sh@391 -- # pt= 00:06:34.722 13:54:43 -- scripts/common.sh@392 -- # return 1 00:06:34.722 13:54:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:06:34.722 1+0 records in 00:06:34.722 1+0 records out 00:06:34.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628244 s, 167 MB/s 00:06:34.722 13:54:43 -- spdk/autotest.sh@110 -- # for dev 
in /dev/nvme*n!(*p*) 00:06:34.722 13:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:34.722 13:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:34.722 13:54:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:34.722 13:54:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:34.722 No valid GPT data, bailing 00:06:34.722 13:54:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:34.722 13:54:44 -- scripts/common.sh@391 -- # pt= 00:06:34.722 13:54:44 -- scripts/common.sh@392 -- # return 1 00:06:34.722 13:54:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:34.981 1+0 records in 00:06:34.981 1+0 records out 00:06:34.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557883 s, 188 MB/s 00:06:34.981 13:54:44 -- spdk/autotest.sh@118 -- # sync 00:06:34.981 13:54:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:34.981 13:54:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:34.981 13:54:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:37.562 13:54:46 -- spdk/autotest.sh@124 -- # uname -s 00:06:37.562 13:54:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:37.562 13:54:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:37.562 13:54:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.562 13:54:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.562 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.562 ************************************ 00:06:37.562 START TEST setup.sh 00:06:37.562 ************************************ 00:06:37.562 13:54:46 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:37.562 * Looking for test storage... 00:06:37.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:37.562 13:54:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:37.562 13:54:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:37.562 13:54:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:37.562 13:54:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.562 13:54:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.562 13:54:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:37.562 ************************************ 00:06:37.562 START TEST acl 00:06:37.562 ************************************ 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:37.562 * Looking for test storage... 
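The block_in_use checks and dd commands a little earlier in this stretch are a pre-clean pass: a namespace that shows no recognizable partition table gets its first MiB zeroed so stale metadata cannot leak into later tests. A rough sketch of that pattern, not the script's exact control flow:

  for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1; do
      # blkid prints a partition-table type only when one is present
      if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done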
00:06:37.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:37.562 13:54:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:37.562 13:54:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:37.562 13:54:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:37.562 13:54:46 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:38.501 13:54:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:38.501 13:54:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:38.501 13:54:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:38.501 13:54:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:38.501 13:54:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:38.501 13:54:47 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:39.070 13:54:48 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.070 Hugepages 00:06:39.070 node hugesize free / total 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.070 00:06:39.070 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.070 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:39.330 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:06:39.590 13:54:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:39.590 13:54:48 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.590 13:54:48 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.590 13:54:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:39.590 ************************************ 00:06:39.590 START TEST denied 00:06:39.590 ************************************ 00:06:39.590 13:54:48 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:39.590 13:54:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:39.590 13:54:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:39.590 13:54:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:39.590 13:54:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:39.590 13:54:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:40.578 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:40.578 13:54:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:41.516 00:06:41.516 real 0m1.844s 00:06:41.516 user 0m0.661s 00:06:41.516 sys 0m1.175s 00:06:41.516 13:54:50 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.516 13:54:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:41.516 ************************************ 00:06:41.516 END TEST denied 00:06:41.516 ************************************ 00:06:41.516 13:54:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:41.516 13:54:50 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.516 13:54:50 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.516 13:54:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:41.516 ************************************ 00:06:41.516 START TEST allowed 00:06:41.516 ************************************ 00:06:41.516 13:54:50 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:41.516 13:54:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:41.516 13:54:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:41.516 13:54:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:41.516 13:54:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:41.516 13:54:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:42.452 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:42.452 13:54:51 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:43.394 00:06:43.394 real 0m1.786s 00:06:43.394 user 0m0.726s 00:06:43.394 sys 0m1.073s 00:06:43.394 13:54:52 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.394 13:54:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:06:43.394 ************************************ 00:06:43.394 END TEST allowed 00:06:43.394 ************************************ 00:06:43.394 ************************************ 00:06:43.394 END TEST acl 00:06:43.394 ************************************ 00:06:43.394 00:06:43.394 real 0m5.934s 00:06:43.394 user 0m2.394s 00:06:43.394 sys 0m3.568s 00:06:43.394 13:54:52 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.394 13:54:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:43.394 13:54:52 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:43.394 13:54:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.394 13:54:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.394 13:54:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:43.394 ************************************ 00:06:43.394 START TEST hugepages 00:06:43.394 ************************************ 00:06:43.394 13:54:52 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:43.394 * Looking for test storage... 00:06:43.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6022824 kB' 'MemAvailable: 7406544 kB' 'Buffers: 2436 kB' 'Cached: 1598000 kB' 'SwapCached: 0 kB' 'Active: 437116 kB' 'Inactive: 1269104 kB' 'Active(anon): 116272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 107440 kB' 'Mapped: 48648 kB' 'Shmem: 10488 kB' 'KReclaimable: 61424 kB' 'Slab: 135088 kB' 'SReclaimable: 61424 kB' 'SUnreclaim: 73664 kB' 'KernelStack: 6340 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 339060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.394 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 
13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.395 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 
13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
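At this point the scan has matched Hugepagesize and returned 2048, so the script records a 2048 kB default page size together with the sysfs knobs it will drive (/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages), enumerates the NUMA nodes (a single node0 on this VM), and zeroes any pre-existing per-node hugepage reservations before the tests start. A hedged sketch of that clearing step, assuming the standard sysfs layout and root privileges:

    # Reset per-node hugepage counts to zero for every supported page size (the clear_hp step in the trace).
    for node in /sys/devices/system/node/node[0-9]*; do
        for nr in "$node"/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$nr"
        done
    done
    export CLEAR_HUGE=yes   # exported for the SPDK setup scripts, exactly as the trace does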
00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:43.396 13:54:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:43.396 13:54:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.396 13:54:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.396 13:54:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:43.396 ************************************ 00:06:43.396 START TEST default_setup 00:06:43.396 ************************************ 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:43.396 13:54:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.397 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.397 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8048292 kB' 'MemAvailable: 9431876 kB' 'Buffers: 2436 kB' 'Cached: 1597988 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 1269116 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 124124 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134648 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73524 kB' 'KernelStack: 6320 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
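The default_setup test starting in this stretch of the log requests 2097152 units of hugepage memory for node 0, and the trace ends up with nr_hugepages=1024 for that node: 2097152 / 2048 = 1024 pages of the 2048 kB size found earlier, which agrees with the HugePages_Total: 1024 and Hugetlb: 2097152 kB fields in the /proc/meminfo snapshot just above. Spelled out, with illustrative variable names:

    # The arithmetic behind the nr_hugepages=1024 value recorded by the trace.
    requested=2097152      # argument passed to get_test_nr_hugepages
    page_size_kb=2048      # Hugepagesize reported by /proc/meminfo
    echo $(( requested / page_size_kb ))   # -> 1024, i.e. 1024 pages of 2048 kB = 2097152 kB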
00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.397 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
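For the verification pass the meminfo helper is called with no node argument (node= stays empty, so the node-specific path /sys/devices/system/node/node/meminfo does not exist and /proc/meminfo is used), snapshots the file once with mapfile, strips the "Node <n> " prefix that per-node meminfo files carry, and then runs the same key scan over the cached lines. A sketch of that snapshot-then-scan shape, with illustrative helper names:

    shopt -s extglob   # needed for the +([0-9]) pattern used in the prefix strip

    # Cache one meminfo source (global or per-node) and normalise per-node prefixes.
    snapshot_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 AnonHugePages: ..." -> "AnonHugePages: ..."
    }

    # Look a field up in the cached snapshot instead of re-reading the file.
    lookup() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    snapshot_meminfo ''      # no node given, as in the trace
    lookup AnonHugePages     # -> 0 on this run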
00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.398 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 8048292 kB' 'MemAvailable: 9431876 kB' 'Buffers: 2436 kB' 'Cached: 1597988 kB' 'SwapCached: 0 kB' 'Active: 453456 kB' 'Inactive: 1269116 kB' 'Active(anon): 132612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134636 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73512 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.399 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.400 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.401 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8048292 kB' 'MemAvailable: 9431876 kB' 'Buffers: 2436 kB' 'Cached: 1597988 kB' 'SwapCached: 0 kB' 'Active: 453464 kB' 'Inactive: 1269116 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134636 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73512 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.665 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.666 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.667 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:44.667 nr_hugepages=1024 00:06:44.667 resv_hugepages=0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:44.667 surplus_hugepages=0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:44.667 anon_hugepages=0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8048292 kB' 'MemAvailable: 9431876 kB' 'Buffers: 2436 kB' 'Cached: 1597988 kB' 'SwapCached: 0 kB' 'Active: 453436 kB' 'Inactive: 1269116 kB' 'Active(anon): 132592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123744 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134636 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73512 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.667 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
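The long runs of "[[ <key> == HugePages_Surp ]] / continue" (and the same for HugePages_Rsvd and HugePages_Total) around this point are the xtrace of the get_meminfo helper in setup/common.sh: it slurps the relevant meminfo file, strips any "Node <n>" prefix, then walks the ': '-separated key/value pairs and skips every key until it reaches the requested one, whose value it echoes. Below is a minimal standalone sketch of that pattern, assuming bash with extglob; it is an approximation for illustration, not the SPDK helper verbatim.

  #!/usr/bin/env bash
  # Minimal sketch of the meminfo-parsing pattern visible in this trace --
  # an approximation, not the SPDK setup/common.sh get_meminfo verbatim.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups (e.g. node=0) read the node's own meminfo file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node <n> "; strip it so
      # the key always sits in the first ': '-separated field.
      mem=("${mem[@]#Node +([0-9]) }")

      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Skip every key until the requested one, then echo its value.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  # Usage mirroring hugepages.sh@99-100 in this trace: surplus and reserved
  # huge pages, both 0 in this run.
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  echo "surp=$surp resv=$resv"

With surp and resv in hand, the surrounding hugepages.sh lines check that the kernel-reported counters add up to the requested allocation, (( 1024 == nr_hugepages + surp + resv )) with HugePages_Total echoed as 1024, and then repeat the same per-node lookup of HugePages_Surp against /sys/devices/system/node/node0/meminfo for the single NUMA node in this VM.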
00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.668 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8048292 kB' 'MemUsed: 4193688 kB' 'SwapCached: 0 kB' 'Active: 453652 kB' 'Inactive: 1269116 kB' 'Active(anon): 132808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1600424 kB' 'Mapped: 48672 kB' 'AnonPages: 123924 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61124 kB' 'Slab: 134636 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 
13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.669 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:44.670 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:44.670 node0=1024 expecting 1024 00:06:44.671 13:54:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:44.671 00:06:44.671 real 0m1.133s 00:06:44.671 user 0m0.491s 00:06:44.671 sys 0m0.581s 00:06:44.671 13:54:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.671 13:54:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 ************************************ 00:06:44.671 END TEST default_setup 00:06:44.671 ************************************ 00:06:44.671 13:54:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:44.671 13:54:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.671 13:54:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.671 13:54:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 ************************************ 00:06:44.671 START TEST per_node_1G_alloc 00:06:44.671 ************************************ 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:44.671 13:54:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.244 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.244 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.244 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9100908 kB' 'MemAvailable: 10484496 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 454000 kB' 'Inactive: 1269120 kB' 'Active(anon): 133156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 
'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124332 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134548 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73424 kB' 'KernelStack: 6344 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
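The trace records above show setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: it strips any leading "Node <n>" prefix, splits each line on ': ', skips non-matching keys with "continue", and echoes the value once the requested field (AnonHugePages, HugePages_Surp, and so on) is reached. A minimal stand-alone sketch of that parsing pattern follows; the function name meminfo_value and its argument handling are illustrative only, not the SPDK script itself.

  # Sketch only: read one field out of /proc/meminfo (or a node-local meminfo file).
  #   meminfo_value <field> [node]     e.g. meminfo_value HugePages_Surp
  meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#"Node $node "}          # node files prefix every line with "Node <n> "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then       # any other key is skipped, like the 'continue' records above
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

On this host the surplus count is 0, which is what the "-- setup/common.sh@33 -- # echo 0" / "# return 0" records at the end of each pass correspond to.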
00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.245 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9100908 kB' 'MemAvailable: 10484496 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453416 kB' 'Inactive: 1269120 kB' 'Active(anon): 132572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123720 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134552 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73428 kB' 'KernelStack: 6304 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.246 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.246 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.247 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
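For context on the NRHUGE=512 HUGENODE=0 invocation of scripts/setup.sh earlier in this test: restricting the reservation to a single NUMA node ultimately comes down to the standard per-node sysfs knob that the kernel exposes. A rough illustration of that interface is below; it is not the SPDK script, and the variable names are illustrative.

  # Sketch only: reserve 512 default-size hugepages on NUMA node 0 (needs root).
  NRHUGE=512
  HUGENODE=0
  HPSIZE_KB=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this host
  echo "$NRHUGE" > "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-${HPSIZE_KB}kB/nr_hugepages"
  # The node-local meminfo should then show the reservation that verify_nr_hugepages checks:
  grep HugePages_Total "/sys/devices/system/node/node${HUGENODE}/meminfo"
  # 512 pages x 2048 kB = 1048576 kB, matching the 'Hugetlb: 1048576 kB' entries in the
  # meminfo dumps above -- i.e. 1 GiB per node, which is where the per_node_1G_alloc name comes from.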
00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9101428 kB' 'MemAvailable: 10485016 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453592 kB' 'Inactive: 1269120 kB' 'Active(anon): 132748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134552 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73428 kB' 'KernelStack: 6320 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.248 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.249 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.249 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.250 
13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:45.250 nr_hugepages=512 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:45.250 resv_hugepages=0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:45.250 surplus_hugepages=0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:45.250 anon_hugepages=0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:45.250 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9101032 kB' 'MemAvailable: 10484620 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453572 kB' 'Inactive: 1269120 kB' 'Active(anon): 132728 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123872 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134540 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73416 kB' 'KernelStack: 6320 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.251 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:45.252 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9101032 kB' 'MemUsed: 3140948 kB' 'SwapCached: 0 kB' 'Active: 453540 kB' 'Inactive: 1269120 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1600428 kB' 'Mapped: 48672 kB' 'AnonPages: 123832 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61124 kB' 'Slab: 134540 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc 
00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:06:45.253 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- (the same IFS=': ' / read -r var val _ / continue cycle repeats for KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free; none of them matches HugePages_Surp)
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:45.254 node0=512 expecting 512
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:45.254
00:06:45.254 real    0m0.633s
00:06:45.254 user    0m0.302s
00:06:45.254 sys     0m0.371s
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:45.254 13:54:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:45.254 ************************************
00:06:45.254 END TEST per_node_1G_alloc
00:06:45.254 ************************************
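The field-by-field walk that just finished is the generic /proc/meminfo lookup used throughout these setup tests: get_meminfo splits each 'Key: value' line on IFS=': ' and echoes the value of the one requested key (here HugePages_Surp, which came back 0). Below is a minimal sketch of that pattern; the helper name get_meminfo_sketch and the plain-file redirect are illustrative only, not the actual setup/common.sh implementation, which also supports an optional per-NUMA-node meminfo path.

    # Sketch only: scan /proc/meminfo for one key and print its value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key, as the trace above does
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this host, matching the "echo 0" above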
00:06:45.254 13:54:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:45.254 13:54:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:45.254 13:54:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:45.254 13:54:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:45.254 ************************************
00:06:45.254 START TEST even_2G_alloc
00:06:45.254 ************************************
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:45.254 13:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:45.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:45.828 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:45.828 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
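The get_test_nr_hugepages trace above turns the requested size of 2097152 kB (2 GiB) into nr_hugepages=1024 and, with a single NUMA node and HUGE_EVEN_ALLOC=yes, assigns all 1024 pages to node 0 before scripts/setup.sh is invoked. A hedged sketch of that arithmetic follows, assuming the count is simply the requested size divided by the 2048 kB default hugepage size reported in the meminfo snapshots later in this log; the variable names mirror the trace but the snippet is not the script's own code.

    # Sketch of the size -> page-count conversion seen in the trace above.
    default_hugepages=2048                           # kB, Hugepagesize from /proc/meminfo
    size=2097152                                     # kB, the 2 GiB even_2G_alloc request
    nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"  # matches the assignments in the trace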
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8063816 kB' 'MemAvailable: 9447404 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453568 kB' 'Inactive: 1269120 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124096 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134592 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73468 kB' 'KernelStack: 6328 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
00:06:45.828 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- (every key from MemTotal through HardwareCorrupted is read with IFS=': ' and skipped with continue; none of them matches AnonHugePages)
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
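The anon=0 above is the AnonHugePages counter, and the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check just before it gates on the transparent-hugepage setting: the string the script compared is 'always [madvise] never', so THP is not disabled and the counter is collected. A small sketch of that gate follows, assuming the standard sysfs knob; the awk extraction is illustrative rather than the script's own code.

    # Sketch: only look at AnonHugePages when THP is not set to "never".
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon:-0} kB"   # 0 kB in the snapshot above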
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8063564 kB' 'MemAvailable: 9447152 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453512 kB' 'Inactive: 1269120 kB' 'Active(anon): 132668 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123804 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134604 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73480 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
00:06:45.830 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- (every key from MemTotal through HugePages_Rsvd is read with IFS=': ' and skipped with continue; none of them matches HugePages_Surp)
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
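With surp=0 recorded, the verify step moves on to HugePages_Rsvd. In the snapshots printed above the pool is completely idle: HugePages_Total and HugePages_Free are both 1024 while HugePages_Rsvd and HugePages_Surp are 0, which is presumably what the test wants to see right after setup.sh resized the pool. As a hedged aside (not part of setup/common.sh), the counters this loop walks one key at a time can also be read in a single pass:

    # Illustration only: dump the hugepage-related /proc/meminfo counters directly.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
    # Snapshot above: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0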
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8063564 kB' 'MemAvailable: 9447152 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 454232 kB' 'Inactive: 1269120 kB' 'Active(anon): 133388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124612 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134604 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73480 kB' 'KernelStack: 6368 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 361004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:45.832 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- (keys MemTotal through Committed_AS are read with IFS=': ' and skipped with continue; none of them matches HugePages_Rsvd)
00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 
13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:45.834 nr_hugepages=1024 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:45.834 resv_hugepages=0 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:45.834 surplus_hugepages=0 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:45.834 anon_hugepages=0 00:06:45.834 13:54:55 
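[editor's note] The long run of "continue" iterations above is setup/common.sh's get_meminfo helper scanning a meminfo-style file key by key until it reaches the requested field (HugePages_Rsvd, which resolves to 0 here and becomes resv=0). The backslash-heavy patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are only xtrace's rendering of a literal string match. A minimal, simplified sketch of that lookup pattern follows; the function name and direct file read are assumptions for illustration, not the exact SPDK implementation.

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above: walk /proc/meminfo field by
# field and print the value of the requested key, skipping everything else
# exactly like the run of "continue" iterations in the log.
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # not the key we want yet
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1
}

# Usage (matches the run above, which yielded resv=0):
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "resv=$resv"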
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.834 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8063564 kB' 'MemAvailable: 9447152 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453600 kB' 'Inactive: 1269120 kB' 'Active(anon): 132756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134596 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73472 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
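[editor's note] The /proc/meminfo snapshot printed above already shows the even 2G allocation in place: 1024 hugepages of 2048 kB account for the reported 'Hugetlb: 2097152 kB' (2 GiB). A tiny illustrative consistency check, with the numbers copied from the trace:

# 1024 pages * 2048 kB/page should equal the Hugetlb total of 2097152 kB (2 GiB).
pages=1024 page_kb=2048 hugetlb_kb=2097152
(( pages * page_kb == hugetlb_kb )) && echo "even 2G alloc accounted for: $((pages * page_kb)) kB"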
00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.835 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.836 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc 
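[editor's note] At this point the test has confirmed HugePages_Total == nr_hugepages + surp + resv and moved on to get_nodes (hugepages.sh@27-33): it enumerates NUMA nodes from sysfs and records how many hugepages each node holds (a single node with 1024 pages in this run). A sketch of that enumeration step follows; variable names mirror the trace, but reading the per-node count from nr_hugepages in sysfs is an assumption, since the log only shows the already-expanded value.

#!/usr/bin/env bash
# Sketch of the get_nodes step: discover NUMA nodes and record their 2 MiB
# hugepage counts, indexed by node number.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# assumed source of the per-node count (the trace shows only "=1024")
	nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
declare -p nodes_sys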
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.098 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8063564 kB' 'MemUsed: 4178416 kB' 'SwapCached: 0 kB' 'Active: 453600 kB' 'Inactive: 1269120 kB' 'Active(anon): 132756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1600428 kB' 'Mapped: 48672 kB' 'AnonPages: 123904 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61124 kB' 'Slab: 134596 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
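[editor's note] The lookup that starts here is the per-node variant of get_meminfo: because node=0 was passed, common.sh switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node <n> " prefix those lines carry before running the same key/value scan (hence the MemUsed field, which only exists in the node-local file). A simplified sketch of that source selection and prefix stripping:

# Pick the node-local meminfo when a node index is supplied, then strip the
# "Node <n> " prefix so the same IFS=': ' parsing works on both files.
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
	mem_f=/sys/devices/system/node/node$node/meminfo

shopt -s extglob
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'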
val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 
13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.099 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:46.100 node0=1024 expecting 1024 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:46.100 00:06:46.100 real 0m0.638s 00:06:46.100 user 0m0.293s 00:06:46.100 sys 0m0.380s 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.100 13:54:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:46.100 
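[editor's note] The even_2G_alloc verification has just completed: the per-node count is compared against the expected 1024, and the escaped pattern [[ 1024 == \1\0\2\4 ]] in the trace is simply xtrace's rendering of a literal string comparison. An illustrative recap of the accounting the test performed, with values copied from this run (the helper below is not the hugepages.sh implementation):

# Global pool must equal requested + surplus + reserved, and each node must
# hold the count the test expects (1024 here).
nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) ||
	echo "WARN: HugePages_Total does not match requested + surplus + reserved" >&2

nodes_test=([0]=1024)
for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
	[[ ${nodes_test[node]} == "$nr_hugepages" ]] || exit 1
done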
************************************ 00:06:46.100 END TEST even_2G_alloc 00:06:46.100 ************************************ 00:06:46.100 13:54:55 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:46.100 13:54:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.100 13:54:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.100 13:54:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:46.100 ************************************ 00:06:46.100 START TEST odd_alloc 00:06:46.100 ************************************ 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.100 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.725 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.725 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:46.725 13:54:55 
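[editor's note] The odd_alloc test above requests 2098176 kB (HUGEMEM=2049 MiB); with 2048 kB hugepages that is 1024.5 pages, and the trace settles on nr_hugepages=1025, i.e. a deliberately odd page count. The exact rounding done inside get_test_nr_hugepages is not visible in this excerpt, so the sketch below assumes ceiling division:

# Sketch of the size -> page-count step behind "nr_hugepages=1025" above.
HUGEMEM=2049                                # MiB, as exported by the test
size_kb=$(( HUGEMEM * 1024 ))               # 2098176 kB
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
# ceiling division is an assumption; hugepages.sh's rounding is not shown here
nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"           # 1025 with 2048 kB pages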
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8070716 kB' 'MemAvailable: 9454304 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 1269120 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134592 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73468 kB' 'KernelStack: 6336 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.725 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8070716 kB' 'MemAvailable: 9454304 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453744 kB' 'Inactive: 1269120 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124008 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134580 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73456 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.726 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:46.727 13:54:55 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8071096 kB' 'MemAvailable: 9454684 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453760 kB' 'Inactive: 1269120 kB' 'Active(anon): 132916 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134580 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73456 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.727 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 
13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:46.728 nr_hugepages=1025 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:46.728 resv_hugepages=0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:46.728 surplus_hugepages=0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:46.728 anon_hugepages=0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8071096 kB' 'MemAvailable: 9454684 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453468 kB' 'Inactive: 1269120 kB' 'Active(anon): 132624 kB' 'Inactive(anon): 0 kB' 
'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123984 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134580 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73456 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.728 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 
13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
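The long runs above of a [[ <field> == HugePages_Total ]] test followed by continue are bash xtrace output from the get_meminfo helper in setup/common.sh: it walks every key/value line of /proc/meminfo with IFS=': ' and read -r var val _, skips each key that is not the one requested, and echoes the value once it matches. A condensed stand-alone sketch of that pattern follows; the function name and loop shape are simplified for illustration (the traced helper buffers the file with mapfile first), so treat it as an approximation rather than the SPDK helper itself.

#!/usr/bin/env bash
# Simplified sketch of the /proc/meminfo scan traced above. Every key that is
# not the requested one produces exactly one "continue", which is why the
# xtrace repeats once per meminfo field. Illustrative only.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field
        echo "$val"                        # e.g. "1025" for HugePages_Total
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total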
00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:46.729 
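Just before this point the trace runs get_nodes from setup/hugepages.sh, which globs /sys/devices/system/node/node<N>, records the expected hugepage count for each node (1025 on the single node of this VM), and sets no_nodes=1. The short sketch below reproduces that enumeration in a self-contained form; the glob and array names mirror the trace, but the script itself is a simplified illustration, not the test code.

#!/usr/bin/env bash
# Sketch of the node enumeration seen in the trace (get_nodes): collect NUMA
# node ids from sysfs and seed an expected per-node hugepage count.
shopt -s extglob nullglob

declare -a nodes_sys
expected=1025                                 # value used by the odd_alloc run above
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$expected       # e.g. nodes_sys[0]=1025
done
no_nodes=${#nodes_sys[@]}

(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
for n in "${!nodes_sys[@]}"; do
    echo "node$n expects ${nodes_sys[$n]} hugepages"
done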
13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8070872 kB' 'MemUsed: 4171108 kB' 'SwapCached: 0 kB' 'Active: 453464 kB' 'Inactive: 1269120 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1600428 kB' 'Mapped: 48672 kB' 'AnonPages: 123984 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61124 kB' 'Slab: 134580 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
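For the per-node pass the same scan is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo (visible above as mem_f=/sys/devices/system/node/node0/meminfo), and every line in that file carries a "Node 0 " prefix that the trace strips with ${mem[@]#Node +([0-9]) } before the usual key/value loop. The sketch below reproduces that branch in a self-contained form; the function name is illustrative, and the prefix strip is written against a literal node id rather than the extglob pattern used in the trace.

#!/usr/bin/env bash
# Sketch of the per-node branch of the meminfo scan traced above: read the
# node-local meminfo file, drop the "Node <N> " prefix, then look up one key.
# Illustrative only; the helper in setup/common.sh differs in detail.
get_node_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n ${node:-} && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it so the same
    # "key: value" parsing works for both files (the trace uses the extglob
    # pattern "Node +([0-9]) " for the same purpose).
    mem=("${mem[@]#Node $node }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # prints 0 in the run above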
00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.729 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.730 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:46.991 node0=1025 expecting 1025 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:46.991 
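The odd_alloc test that finishes here allocates an odd number of 2048 kB hugepages (nr_hugepages=1025), re-reads HugePages_Total, HugePages_Rsvd and HugePages_Surp, checks that the total equals the requested count plus surplus and reserved pages, and finally confirms the whole pool landed on node 0 ("node0=1025 expecting 1025"). A minimal end-to-end sketch of that check is below; it writes to /proc/sys/vm/nr_hugepages, so it needs root on a disposable machine, and it is an approximation of the traced hugepages.sh logic, not the SPDK test itself.

#!/usr/bin/env bash
# Minimal sketch of the odd_alloc check traced above: request an odd hugepage
# count and verify the kernel counters agree. Requires root; approximation
# of the setup/hugepages.sh verification, not the SPDK test.
set -euo pipefail

want=1025
echo "$want" > /proc/sys/vm/nr_hugepages

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

# Same invariant the trace checks: (( total == nr_hugepages + surp + resv ))
(( total == want + surp + rsvd )) || { echo "hugepage accounting mismatch" >&2; exit 1; }

node0=$(awk '/HugePages_Total:/ {print $4}' /sys/devices/system/node/node0/meminfo)
echo "node0=$node0 expecting $want"
[[ $node0 == "$want" ]]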
00:06:46.991 real 0m0.749s 00:06:46.991 user 0m0.355s 00:06:46.991 sys 0m0.439s 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.991 13:54:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:46.991 ************************************ 00:06:46.991 END TEST odd_alloc 00:06:46.991 ************************************ 00:06:46.991 13:54:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:46.991 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.991 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.991 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:46.991 ************************************ 00:06:46.991 START TEST custom_alloc 00:06:46.991 ************************************ 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.991 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:47.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.252 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:47.252 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:47.517 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:47.517 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:47.517 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:47.517 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:47.517 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:47.518 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9125844 kB' 'MemAvailable: 10509432 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453792 kB' 'Inactive: 1269120 kB' 'Active(anon): 132948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134452 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6336 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
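The custom_alloc test that starts above asks for 1048576 kB of hugepage memory; with the default 2048 kB Hugepagesize that request becomes nr_hugepages=512, pinned to node 0 through HUGENODE='nodes_hp[0]=512' before /home/vagrant/spdk_repo/spdk/scripts/setup.sh is invoked, and the meminfo dump that follows indeed reports HugePages_Total: 512. The sketch below shows that size-to-count computation in isolation; the helper name is made up for illustration, and only the arithmetic and the HUGENODE string mirror the trace.

#!/usr/bin/env bash
# Sketch of the size-to-count computation behind custom_alloc: convert a
# requested amount of hugepage memory (in kB) into a page count using the
# system's default hugepage size, then pin the pool to node 0 via HUGENODE.
# Helper name is illustrative; the arithmetic matches the traced run.
hugepages_for_size_kb() {
    local size_kb=$1 page_kb
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo $(( size_kb / page_kb ))
}

nr_hugepages=$(hugepages_for_size_kb 1048576)   # 1048576 / 2048 = 512
HUGENODE="nodes_hp[0]=$nr_hugepages"            # same string as the trace: nodes_hp[0]=512
export HUGENODE
echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"
# The traced run then calls /home/vagrant/spdk_repo/spdk/scripts/setup.sh with this environment.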
00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.518 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9125844 kB' 'MemAvailable: 10509432 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453632 kB' 'Inactive: 1269120 kB' 'Active(anon): 132788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 124148 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134452 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6304 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 
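The AnonHugePages lookup that just returned (common.sh@33 echo 0, hugepages.sh@97 anon=0) follows the same pattern as every other get_meminfo call in this trace. A minimal reconstruction of that lookup, pieced together from the setup/common.sh@16-@33 markers above rather than copied from the repository, so the exact node-file branch and the final fallback are assumptions:

shopt -s extglob   # needed for the +([0-9]) pattern used at @29

get_meminfo() {
    local get=$1 node=${2:-}          # @17/@18: key to look up, optional NUMA node
    local var val
    local mem_f=/proc/meminfo mem

    # @23/@25: if a node was given and its per-node meminfo exists, read that file instead
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # @28/@29: capture the file and strip the "Node <n> " prefix per-node files carry
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # @31/@32/@33: scan key by key, skip until the requested one, print its value
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumed fallback; the runs traced here always find their key
}

# e.g. get_meminfo HugePages_Surp  -> 0, matching the surp=0 assignment further down

The char-by-char escaping in the traced comparisons ([[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]) is just how xtrace renders a quoted right-hand side; each iteration that fails the comparison hits the @32 continue, which is why the log repeats the same two bookkeeping lines (IFS=': ', read -r var val _) for every meminfo key.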
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.519 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.520 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.521 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.521 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9125844 kB' 'MemAvailable: 10509432 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453568 kB' 'Inactive: 1269120 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 124096 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134452 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- 
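The hugepage figures in the snapshot just replayed are internally consistent, which is what the custom_alloc bookkeeping below relies on: 512 pages of 2048 kB each account exactly for the reported Hugetlb memory. A one-line check of that arithmetic:

echo $(( 512 * 2048 ))   # 1048576, matching the 'Hugetlb: 1048576 kB' field in the snapshot above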
setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.522 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.523 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:47.524 nr_hugepages=512 00:06:47.524 resv_hugepages=0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:47.524 surplus_hugepages=0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:47.524 anon_hugepages=0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:47.524 
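The hugepages.sh steps traced at @97-@110 above reduce to a bookkeeping check: the anonymous, surplus and reserved counts read back from meminfo are all zero, so the pool must consist of exactly the 512 requested pages before HugePages_Total is re-read and the per-node totals are compared. A minimal sketch of that check, with the variable names and the 512 figure taken from the trace and the failure branch assumed:

anon=0             # hugepages.sh@97:  from get_meminfo AnonHugePages
surp=0             # hugepages.sh@99:  from get_meminfo HugePages_Surp
resv=0             # hugepages.sh@100: from get_meminfo HugePages_Rsvd
nr_hugepages=512   # echoed above as nr_hugepages=512

# @107/@109: the requested 512 pages must equal the pool once surplus and
# reserved pages are accounted for, before HugePages_Total is re-read (@110).
if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
    echo "hugepage pool matches the requested 512 pages"
else
    echo "unexpected hugepage accounting" >&2   # assumed failure handling
fi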
13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9125844 kB' 'MemAvailable: 10509432 kB' 'Buffers: 2436 kB' 'Cached: 1597992 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 1269120 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 124016 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61124 kB' 'Slab: 134448 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73324 kB' 'KernelStack: 6304 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.524 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.525 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9125844 kB' 'MemUsed: 3116136 kB' 'SwapCached: 0 kB' 
'Active: 453612 kB' 'Inactive: 1269120 kB' 'Active(anon): 132768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1600428 kB' 'Mapped: 48688 kB' 'AnonPages: 124100 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61124 kB' 'Slab: 134448 kB' 'SReclaimable: 61124 kB' 'SUnreclaim: 73324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 
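At this point the helper has switched from /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo (common.sh@23-24 above) and strips the leading "Node 0" prefix before matching keys, which is how the node-level HugePages_Surp value is obtained. A hedged standalone sketch of that per-node lookup (function name and the sed/awk pipeline are illustrative, not the project's code):

  #!/usr/bin/env bash
  # Illustrative sketch of the per-node branch seen in the trace: read
  # /sys/devices/system/node/node<N>/meminfo, drop the "Node <N>" prefix
  # ("Node 0 HugePages_Surp: 0"), then match the requested key.
  get_node_meminfo() {
      local key=$1 node=$2
      local f=/sys/devices/system/node/node${node}/meminfo
      [[ -e $f ]] || f=/proc/meminfo    # fall back to the system-wide file
      sed -E 's/^Node [0-9]+ +//' "$f" | awk -v k="${key}:" '$1 == k { print $2 }'
  }

  # Walk every NUMA node the way hugepages.sh's get_nodes does and report its
  # hugepage accounting.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      echo "node${node}:" \
           "HugePages_Total=$(get_node_meminfo HugePages_Total "$node")" \
           "HugePages_Free=$(get_node_meminfo HugePages_Free "$node")" \
           "HugePages_Surp=$(get_node_meminfo HugePages_Surp "$node")"
  done

On this single-node VM the loop visits only node0, which is why the log settles on nodes_sys[0]=512 and no_nodes=1.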
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.526 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:47.527 node0=512 expecting 512 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:47.527 00:06:47.527 real 0m0.696s 00:06:47.527 user 0m0.315s 00:06:47.527 sys 0m0.425s 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.527 13:54:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 ************************************ 00:06:47.527 END TEST custom_alloc 00:06:47.527 ************************************ 00:06:47.527 13:54:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:47.527 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.527 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.527 13:54:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 ************************************ 00:06:47.527 START TEST no_shrink_alloc 00:06:47.527 ************************************ 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:47.527 13:54:56 
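The custom_alloc test finishes here ("node0=512 expecting 512", END TEST custom_alloc) and no_shrink_alloc begins with get_test_nr_hugepages 2097152 0, i.e. it converts a requested size into a page count for node 0. A small sketch of that arithmetic, under the assumption that both the requested size and Hugepagesize are in kB — which matches the 2048 kB page size and the 1024-page target seen later in the log:

  #!/usr/bin/env bash
  # Illustrative sketch of the arithmetic behind "get_test_nr_hugepages 2097152 0":
  # divide the requested size by the default hugepage size to get the
  # nr_hugepages target for the named node (2097152 kB / 2048 kB = 1024 pages).
  size_kb=2097152
  node_id=0
  hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "requesting nr_hugepages=${nr_hugepages} (${hugepagesize_kb} kB pages) on node ${node_id}"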
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:47.527 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:47.528 13:54:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.096 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:48.096 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8079756 kB' 'MemAvailable: 9463340 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 448308 kB' 'Inactive: 1269124 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118360 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134216 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73104 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54824 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
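The verification pass traced here first checks the kernel's transparent_hugepage mode at hugepages.sh@96 ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) and only then reads AnonHugePages from /proc/meminfo; with THP set to [never] the anonymous-hugepage contribution would be treated as zero. A hedged standalone sketch of that gate (not the project's verify_nr_hugepages):

  #!/usr/bin/env bash
  # Illustrative sketch: the active transparent_hugepage mode is the bracketed
  # word in /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never"
  # in this run). Only count AnonHugePages when THP is not disabled outright.
  thp_modes=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo "[never]")
  if [[ $thp_modes != *"[never]"* ]]; then
      anon_hugepages=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
  else
      anon_hugepages=0
  fi
  echo "anon_hugepages=${anon_hugepages} (transparent_hugepage: ${thp_modes})"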
00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.096 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.097 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8079756 kB' 'MemAvailable: 9463340 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 448200 kB' 'Inactive: 1269124 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118456 kB' 'Mapped: 47948 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134216 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73104 kB' 'KernelStack: 6176 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54824 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.358 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 
13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8079756 kB' 'MemAvailable: 9463340 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 447856 kB' 'Inactive: 1269124 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 47948 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134216 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73104 kB' 'KernelStack: 6192 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54840 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.359 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 
13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.360 nr_hugepages=1024 00:06:48.360 resv_hugepages=0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:48.360 surplus_hugepages=0 00:06:48.360 anon_hugepages=0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.360 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8079504 kB' 'MemAvailable: 9463088 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 448436 kB' 'Inactive: 1269124 kB' 'Active(anon): 127592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134216 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73104 kB' 'KernelStack: 6192 kB' 'PageTables: 3764 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 338388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54824 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 
13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 
13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
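The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key (MemTotal, MemFree, ... each hitting "continue") until it reaches the HugePages_Total counter it was asked for. A minimal bash sketch of that parsing pattern follows; the function name and interface are illustrative, not the repo's actual helper, and the real get_meminfo mapfiles the whole file into an array before scanning rather than streaming it as done here.

  #!/usr/bin/env bash
  # Minimal sketch of the meminfo scan being traced here: read "key: value"
  # pairs and stop at the requested counter. Name and interface are
  # illustrative; the real logic lives in get_meminfo() in test/setup/common.sh.
  shopt -s extglob

  meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Per-node counters (e.g. HugePages_Surp for node0) live under sysfs.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }          # strip the "Node N " prefix, if any
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  meminfo_value HugePages_Total       # e.g. 1024, matching the dump above
  meminfo_value HugePages_Surp 0      # surplus huge pages on node0, 0 here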
00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:48.361 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8080136 kB' 'MemUsed: 4161844 kB' 'SwapCached: 0 kB' 'Active: 447780 kB' 'Inactive: 1269124 kB' 'Active(anon): 126936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 1600432 kB' 'Mapped: 47936 kB' 'AnonPages: 118332 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61112 kB' 'Slab: 134208 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
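Once the requested key is found, the helper echoes its value and returns (the "echo 1024" / "return 0" entries above), hugepages.sh asserts it against its expectation with "(( 1024 == nr_hugepages + surp + resv ))", and then repeats the scan per NUMA node through /sys/devices/system/node/node0/meminfo looking for HugePages_Surp (the loop continuing below). A hedged sketch of that per-node bookkeeping follows; variable names are illustrative and the surplus/reserved terms of the real check are dropped for brevity.

  #!/usr/bin/env bash
  # Sketch of the per-node bookkeeping hugepages.sh is doing here: collect
  # HugePages_Total for every NUMA node and compare the sum with the
  # system-wide counter from /proc/meminfo.
  shopt -s nullglob

  declare -a nodes_test
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      nodes_test[$node]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
  done

  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  sum=0
  for node in "${!nodes_test[@]}"; do
      (( sum += ${nodes_test[$node]:-0} ))
      echo "node${node}=${nodes_test[$node]}"   # cf. "node0=1024 expecting 1024" below
  done

  (( sum == total )) && echo "hugepage totals consistent" \
                     || echo "mismatch: nodes=$sum total=$total"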
00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 
13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:48.362 node0=1024 expecting 1024 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:48.362 13:54:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.930 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:48.930 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:48.930 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:48.930 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8084240 kB' 'MemAvailable: 9467824 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 448228 kB' 'Inactive: 1269124 kB' 'Active(anon): 127384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118752 kB' 'Mapped: 48116 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134200 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73088 kB' 'KernelStack: 6216 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.930 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 
13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.931 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8084492 kB' 'MemAvailable: 9468076 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 447868 kB' 'Inactive: 1269124 kB' 'Active(anon): 127024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 47936 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134200 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73088 kB' 'KernelStack: 6192 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54840 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.932 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.933 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8084240 kB' 'MemAvailable: 9467824 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 447816 kB' 'Inactive: 1269124 kB' 'Active(anon): 126972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118336 kB' 'Mapped: 47936 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 'Slab: 134200 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73088 kB' 
'KernelStack: 6176 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.934 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
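The xtrace above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time with IFS=': ', hitting "continue" for every key that is not the one requested (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) and echoing the matching value once it is found. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (only the helper's purpose, the field names, and the IFS/read/continue structure come from the log; the names and layout below are illustrative):

#!/usr/bin/env bash
# Sketch of the scan the trace shows: walk /proc/meminfo line by line,
# split on ':' and whitespace, and print the value of a single key.
get_meminfo_sketch() {
    local get=$1                         # e.g. HugePages_Rsvd, AnonHugePages
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue # the long runs of "continue" above
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0                               # missing key is treated as zero
}

# The counters the no_shrink_alloc test reads this way:
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "anon=$anon surp=$surp resv=$resv"

With anon, surp and resv all read back as 0 and HugePages_Total reported as 1024, the "(( 1024 == nr_hugepages + surp + resv ))" check that follows in the log holds, so the test proceeds with nr_hugepages=1024.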
00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:48.935 nr_hugepages=1024 00:06:48.935 resv_hugepages=0 00:06:48.935 surplus_hugepages=0 00:06:48.935 anon_hugepages=0 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:48.935 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8084240 kB' 'MemAvailable: 9467824 kB' 'Buffers: 2436 kB' 'Cached: 1597996 kB' 'SwapCached: 0 kB' 'Active: 448072 kB' 'Inactive: 1269124 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118336 kB' 'Mapped: 47936 kB' 'Shmem: 10464 kB' 'KReclaimable: 61112 kB' 
'Slab: 134200 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73088 kB' 'KernelStack: 6176 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.936 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:48.937 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:48.937 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8083988 kB' 'MemUsed: 4157992 kB' 'SwapCached: 0 kB' 'Active: 448112 kB' 'Inactive: 1269124 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1269124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1600432 kB' 'Mapped: 47936 kB' 'AnonPages: 118688 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61112 kB' 'Slab: 134200 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 73088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 
13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.938 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:48.939 node0=1024 expecting 1024 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:48.939 00:06:48.939 real 0m1.421s 00:06:48.939 user 0m0.670s 00:06:48.939 sys 0m0.800s 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.939 13:54:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:48.939 ************************************ 00:06:48.939 END TEST no_shrink_alloc 00:06:48.939 ************************************ 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:49.198 13:54:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:49.198 ************************************ 00:06:49.198 END TEST hugepages 00:06:49.198 ************************************ 00:06:49.198 00:06:49.198 real 0m5.813s 00:06:49.198 user 0m2.630s 00:06:49.198 sys 0m3.338s 00:06:49.198 13:54:58 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.198 13:54:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:49.198 13:54:58 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:49.198 13:54:58 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.198 13:54:58 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.198 13:54:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:49.198 ************************************ 00:06:49.198 START TEST driver 00:06:49.198 ************************************ 00:06:49.198 13:54:58 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:49.198 * Looking for test storage... 00:06:49.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:49.198 13:54:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:49.198 13:54:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:49.198 13:54:58 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:50.133 13:54:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:50.133 13:54:59 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.133 13:54:59 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.133 13:54:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:50.133 ************************************ 00:06:50.133 START TEST guess_driver 00:06:50.133 ************************************ 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:50.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:50.133 Looking for driver=uio_pci_generic 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:50.133 13:54:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:51.068 13:55:00 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:51.634 00:06:51.634 real 0m1.663s 00:06:51.634 user 0m0.599s 00:06:51.634 sys 0m1.125s 00:06:51.634 13:55:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.634 ************************************ 00:06:51.634 END TEST guess_driver 00:06:51.634 ************************************ 00:06:51.634 13:55:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:51.893 ************************************ 00:06:51.893 END TEST driver 00:06:51.893 ************************************ 00:06:51.893 00:06:51.893 real 0m2.596s 00:06:51.893 user 0m0.877s 00:06:51.893 sys 0m1.875s 00:06:51.893 13:55:00 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.893 13:55:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:51.893 13:55:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:51.893 13:55:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.893 13:55:01 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.893 13:55:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:51.893 ************************************ 00:06:51.893 START TEST devices 00:06:51.893 
************************************ 00:06:51.893 13:55:01 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:51.893 * Looking for test storage... 00:06:51.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:51.893 13:55:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:51.893 13:55:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:51.893 13:55:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:51.893 13:55:01 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:52.831 13:55:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
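The zoned-namespace filter and minimum-size gate traced above come down to two sysfs reads per device. A minimal standalone sketch of the same idea, reusing the 3221225472-byte (3 GiB) threshold from the trace but with an assumed NVMe glob and variable names rather than the real setup/devices.sh helpers, looks like this:

#!/usr/bin/env bash
# Sketch: skip zoned namespaces and keep NVMe block devices large enough to test on.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in the trace
declare -a blocks
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    # A namespace counts as zoned when queue/zoned reports anything other than "none".
    if [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]]; then
        continue
    fi
    size_bytes=$(( $(<"$dev/size") * 512 ))       # sysfs reports size in 512-byte sectors
    (( size_bytes >= min_disk_size )) && blocks+=("$name")
done
printf 'candidate disk: %s\n' "${blocks[@]}"

The actual test additionally records each candidate's PCI address in blocks_to_pci and probes for an existing partition table (the "No valid GPT data, bailing" lines) before accepting the disk.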
00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:52.831 13:55:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:52.831 13:55:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:52.831 13:55:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:52.831 No valid GPT data, bailing 00:06:52.831 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:52.831 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:52.831 13:55:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:52.831 13:55:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:52.831 13:55:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:52.831 13:55:02 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:52.831 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:52.831 13:55:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:52.831 13:55:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:52.831 No valid GPT data, bailing 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:53.091 No valid GPT data, bailing 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:53.091 No valid GPT data, bailing 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.091 13:55:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:53.091 13:55:02 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:53.091 13:55:02 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:53.091 13:55:02 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:53.091 13:55:02 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.091 13:55:02 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.091 13:55:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:53.091 ************************************ 00:06:53.091 START TEST nvme_mount 00:06:53.091 ************************************ 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:53.091 13:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:56.378 Creating new GPT entries in memory. 00:06:56.378 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:56.378 other utilities. 00:06:56.378 13:55:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:56.378 13:55:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:56.378 13:55:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:56.378 13:55:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:56.378 13:55:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:57.316 Creating new GPT entries in memory. 00:06:57.316 The operation has completed successfully. 
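At this point the helper has zapped the disk and created a single small partition; what follows in the trace is the format and mount of that partition. Pulled out of the test harness, the whole sequence is roughly the sketch below, with placeholder paths and partprobe standing in for the repo's sync_dev_uevents.sh/flock coordination:

#!/usr/bin/env bash
# Sketch of the partition/format/mount flow exercised by the nvme_mount test.
set -euo pipefail
disk=/dev/nvme0n1                    # placeholder test disk
mnt=/tmp/nvme_mount_sketch           # placeholder mount point

sgdisk "$disk" --zap-all             # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191   # one 128 MiB partition, matching the trace
partprobe "$disk"                    # let the kernel re-read the partition table
mkfs.ext4 -qF "${disk}p1"            # quiet, forced ext4 format
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"               # dummy file the test verifies and cleans up later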
00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57284 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.316 13:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.574 13:55:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.574 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.833 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.833 13:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:57.833 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:57.833 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:58.093 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.093 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.093 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:58.093 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:58.093 13:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.352 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.611 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.611 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.870 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.870 13:55:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.870 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:58.871 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:58.871 13:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:58.871 13:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:59.130 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:59.389 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:59.649 13:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:59.649 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:59.649 00:06:59.649 real 0m6.363s 00:06:59.649 user 0m0.828s 00:06:59.649 sys 0m1.368s 00:06:59.650 13:55:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.650 13:55:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:59.650 ************************************ 00:06:59.650 END TEST nvme_mount 00:06:59.650 
************************************ 00:06:59.650 13:55:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:59.650 13:55:08 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.650 13:55:08 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.650 13:55:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:59.650 ************************************ 00:06:59.650 START TEST dm_mount 00:06:59.650 ************************************ 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:59.650 13:55:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:00.587 Creating new GPT entries in memory. 00:07:00.587 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:00.587 other utilities. 00:07:00.587 13:55:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:00.587 13:55:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:00.587 13:55:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:00.587 13:55:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:00.587 13:55:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:01.963 Creating new GPT entries in memory. 00:07:01.963 The operation has completed successfully. 
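The mkfs and cleanup_nvme helpers traced in the nvme_mount test above (and reused below for the device-mapper target) reduce to a handful of commands. A rough standalone sketch, using the device and mount point shown in the trace; running it by hand is an assumption here and would need root, which the CI job already has:

#!/usr/bin/env bash
# Sketch only: mirrors the mkfs/cleanup steps visible in the trace above.
dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

# mkfs helper (setup/common.sh): create the mount point, format, mount.
mkdir -p "$mnt"
[ -e "$dev" ] && mkfs.ext4 -qF "$dev"
mount "$dev" "$mnt"
touch "$mnt/test_nvme"            # the file the verify step checks for

# cleanup_nvme (setup/devices.sh): unmount, then wipe the partition and the whole disk.
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all "$dev"
wipefs --all /dev/nvme0n1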
00:07:01.963 13:55:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:01.963 13:55:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:01.963 13:55:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:01.963 13:55:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:01.963 13:55:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:02.895 The operation has completed successfully. 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57754 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:02.895 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:03.154 13:55:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:03.413 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.671 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:03.671 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.671 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:03.671 13:55:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:03.929 13:55:13 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:03.929 13:55:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:04.189 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:04.448 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:07:04.448 00:07:04.448 real 0m4.948s 00:07:04.448 user 0m0.563s 00:07:04.448 sys 0m0.944s 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.448 13:55:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:04.448 ************************************ 00:07:04.448 END TEST dm_mount 00:07:04.448 ************************************ 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:04.708 13:55:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:04.968 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:04.968 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:04.968 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:04.968 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:04.968 13:55:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:04.968 00:07:04.968 real 0m13.070s 00:07:04.968 user 0m2.069s 00:07:04.968 sys 0m3.121s 00:07:04.968 13:55:14 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.968 13:55:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:04.968 ************************************ 00:07:04.968 END TEST devices 00:07:04.968 ************************************ 00:07:04.968 ************************************ 00:07:04.968 END TEST setup.sh 00:07:04.968 ************************************ 00:07:04.968 00:07:04.968 real 0m27.761s 00:07:04.968 user 0m8.096s 00:07:04.968 sys 0m12.132s 00:07:04.968 13:55:14 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.968 13:55:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:04.968 13:55:14 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:05.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.906 Hugepages 00:07:05.906 node hugesize free / total 00:07:05.906 node0 1048576kB 0 / 0 00:07:05.906 node0 2048kB 2048 / 2048 00:07:05.906 00:07:05.906 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:05.906 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:05.906 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:05.906 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:07:06.166 13:55:15 -- spdk/autotest.sh@130 -- # uname -s 00:07:06.166 13:55:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:06.166 13:55:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:06.166 13:55:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:06.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.994 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.994 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.994 13:55:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:08.372 13:55:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:08.372 13:55:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:08.372 13:55:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:08.372 13:55:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:08.372 13:55:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:08.372 13:55:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:08.372 13:55:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:08.372 13:55:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:08.372 13:55:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:08.372 13:55:17 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:08.372 13:55:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:08.372 13:55:17 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:08.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:08.631 Waiting for block devices as requested 00:07:08.631 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.890 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.890 13:55:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:08.890 13:55:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:08.890 13:55:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:08.890 13:55:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:08.890 13:55:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1557 -- # continue 00:07:08.890 13:55:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:08.890 13:55:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:08.890 13:55:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:08.890 13:55:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:08.890 13:55:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:08.890 13:55:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:08.890 13:55:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:08.890 13:55:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:08.890 13:55:18 -- common/autotest_common.sh@1557 -- # continue 00:07:08.890 13:55:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:08.890 13:55:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.890 13:55:18 -- common/autotest_common.sh@10 -- # set +x 00:07:08.890 13:55:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:08.890 13:55:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.890 13:55:18 -- common/autotest_common.sh@10 -- # set +x 00:07:08.890 13:55:18 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:09.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.086 13:55:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:10.087 13:55:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.087 13:55:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 13:55:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:10.087 13:55:19 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:10.087 13:55:19 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:10.087 13:55:19 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:07:10.087 13:55:19 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:10.087 13:55:19 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:10.087 13:55:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:10.087 13:55:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:10.087 13:55:19 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.087 13:55:19 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.087 13:55:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:10.087 13:55:19 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:10.087 13:55:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:10.087 13:55:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:10.087 13:55:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:10.087 13:55:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:10.087 13:55:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:10.087 13:55:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:10.087 13:55:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:10.087 13:55:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:10.087 13:55:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:10.087 13:55:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:10.087 13:55:19 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:10.087 13:55:19 -- common/autotest_common.sh@1593 -- # return 0 00:07:10.087 13:55:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:10.087 13:55:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:10.087 13:55:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:10.087 13:55:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:10.087 13:55:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:10.087 13:55:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.087 13:55:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 13:55:19 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:07:10.087 13:55:19 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:07:10.087 13:55:19 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:07:10.087 13:55:19 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.087 13:55:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.087 13:55:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.087 13:55:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 ************************************ 00:07:10.087 START TEST env 00:07:10.087 ************************************ 00:07:10.087 13:55:19 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.346 * Looking for test storage... 
00:07:10.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:10.346 13:55:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.346 13:55:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.346 13:55:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.346 13:55:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.346 ************************************ 00:07:10.346 START TEST env_memory 00:07:10.346 ************************************ 00:07:10.346 13:55:19 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.346 00:07:10.346 00:07:10.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.346 http://cunit.sourceforge.net/ 00:07:10.346 00:07:10.346 00:07:10.346 Suite: memory 00:07:10.346 Test: alloc and free memory map ...[2024-07-25 13:55:19.536117] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:10.346 passed 00:07:10.346 Test: mem map translation ...[2024-07-25 13:55:19.569854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:10.346 [2024-07-25 13:55:19.569960] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:10.346 [2024-07-25 13:55:19.570049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:10.346 [2024-07-25 13:55:19.570083] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:10.346 passed 00:07:10.346 Test: mem map registration ...[2024-07-25 13:55:19.643834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:10.346 [2024-07-25 13:55:19.643951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:10.606 passed 00:07:10.606 Test: mem map adjacent registrations ...passed 00:07:10.606 00:07:10.606 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.606 suites 1 1 n/a 0 0 00:07:10.606 tests 4 4 4 0 0 00:07:10.606 asserts 152 152 152 0 n/a 00:07:10.606 00:07:10.606 Elapsed time = 0.198 seconds 00:07:10.606 00:07:10.606 real 0m0.209s 00:07:10.606 user 0m0.196s 00:07:10.606 sys 0m0.010s 00:07:10.606 13:55:19 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.606 13:55:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:10.606 ************************************ 00:07:10.606 END TEST env_memory 00:07:10.606 ************************************ 00:07:10.606 13:55:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.606 13:55:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.606 13:55:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.606 13:55:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.606 ************************************ 00:07:10.606 START TEST env_vtophys 00:07:10.606 ************************************ 00:07:10.606 13:55:19 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.606 EAL: lib.eal log level changed from notice to debug 00:07:10.606 EAL: Detected lcore 0 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 1 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 2 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 3 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 4 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 5 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 6 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 7 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 8 as core 0 on socket 0 00:07:10.606 EAL: Detected lcore 9 as core 0 on socket 0 00:07:10.606 EAL: Maximum logical cores by configuration: 128 00:07:10.606 EAL: Detected CPU lcores: 10 00:07:10.606 EAL: Detected NUMA nodes: 1 00:07:10.606 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:10.606 EAL: Detected shared linkage of DPDK 00:07:10.606 EAL: No shared files mode enabled, IPC will be disabled 00:07:10.606 EAL: Selected IOVA mode 'PA' 00:07:10.606 EAL: Probing VFIO support... 00:07:10.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:10.606 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:10.606 EAL: Ask a virtual area of 0x2e000 bytes 00:07:10.606 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:10.606 EAL: Setting up physically contiguous memory... 00:07:10.606 EAL: Setting maximum number of open files to 524288 00:07:10.606 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:10.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:10.606 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.606 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:10.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.606 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.606 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:10.606 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:10.606 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.606 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:10.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.606 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.606 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:10.606 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:10.606 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.606 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:10.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.606 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.606 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:10.606 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:10.606 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.606 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:10.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.606 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.606 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:10.606 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:10.606 EAL: Hugepages will be freed exactly as allocated. 
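For reference, each env sub-test above and below is a standalone CUnit binary under test/env/; run_test appears to add only the START/END banners, timing and xtrace handling around it. Assuming hugepages have already been configured by scripts/setup.sh (as earlier in this log) and root privileges, the same binaries can be invoked directly, a sketch:

repo=/home/vagrant/spdk_repo/spdk
sudo "$repo/test/env/memory/memory_ut"     # mem map alloc/translation/registration tests
sudo "$repo/test/env/vtophys/vtophys"      # EAL bring-up plus the malloc expand/shrink suite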
00:07:10.606 EAL: No shared files mode enabled, IPC is disabled 00:07:10.606 EAL: No shared files mode enabled, IPC is disabled 00:07:10.606 EAL: TSC frequency is ~2290000 KHz 00:07:10.606 EAL: Main lcore 0 is ready (tid=7ff873460a00;cpuset=[0]) 00:07:10.606 EAL: Trying to obtain current memory policy. 00:07:10.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.606 EAL: Restoring previous memory policy: 0 00:07:10.606 EAL: request: mp_malloc_sync 00:07:10.606 EAL: No shared files mode enabled, IPC is disabled 00:07:10.606 EAL: Heap on socket 0 was expanded by 2MB 00:07:10.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:10.606 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:10.606 EAL: Mem event callback 'spdk:(nil)' registered 00:07:10.606 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:10.866 00:07:10.866 00:07:10.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.866 http://cunit.sourceforge.net/ 00:07:10.866 00:07:10.866 00:07:10.866 Suite: components_suite 00:07:10.866 Test: vtophys_malloc_test ...passed 00:07:10.866 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 4MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 4MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 6MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 6MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 10MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 10MB 00:07:10.866 EAL: Trying to obtain current memory policy. 
00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 18MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 18MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 34MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 34MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 66MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 66MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 130MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was shrunk by 130MB 00:07:10.866 EAL: Trying to obtain current memory policy. 00:07:10.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.866 EAL: Restoring previous memory policy: 4 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.866 EAL: request: mp_malloc_sync 00:07:10.866 EAL: No shared files mode enabled, IPC is disabled 00:07:10.866 EAL: Heap on socket 0 was expanded by 258MB 00:07:10.866 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.126 EAL: request: mp_malloc_sync 00:07:11.126 EAL: No shared files mode enabled, IPC is disabled 00:07:11.126 EAL: Heap on socket 0 was shrunk by 258MB 00:07:11.126 EAL: Trying to obtain current memory policy. 
00:07:11.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.126 EAL: Restoring previous memory policy: 4 00:07:11.126 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.126 EAL: request: mp_malloc_sync 00:07:11.126 EAL: No shared files mode enabled, IPC is disabled 00:07:11.126 EAL: Heap on socket 0 was expanded by 514MB 00:07:11.126 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.385 EAL: request: mp_malloc_sync 00:07:11.385 EAL: No shared files mode enabled, IPC is disabled 00:07:11.385 EAL: Heap on socket 0 was shrunk by 514MB 00:07:11.385 EAL: Trying to obtain current memory policy. 00:07:11.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.385 EAL: Restoring previous memory policy: 4 00:07:11.385 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.385 EAL: request: mp_malloc_sync 00:07:11.385 EAL: No shared files mode enabled, IPC is disabled 00:07:11.385 EAL: Heap on socket 0 was expanded by 1026MB 00:07:11.644 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.644 passed 00:07:11.644 00:07:11.644 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.644 suites 1 1 n/a 0 0 00:07:11.644 tests 2 2 2 0 0 00:07:11.644 asserts 5365 5365 5365 0 n/a 00:07:11.644 00:07:11.644 Elapsed time = 0.982 seconds 00:07:11.644 EAL: request: mp_malloc_sync 00:07:11.644 EAL: No shared files mode enabled, IPC is disabled 00:07:11.644 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:11.644 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.644 EAL: request: mp_malloc_sync 00:07:11.644 EAL: No shared files mode enabled, IPC is disabled 00:07:11.644 EAL: Heap on socket 0 was shrunk by 2MB 00:07:11.644 EAL: No shared files mode enabled, IPC is disabled 00:07:11.644 EAL: No shared files mode enabled, IPC is disabled 00:07:11.644 EAL: No shared files mode enabled, IPC is disabled 00:07:11.906 00:07:11.906 real 0m1.185s 00:07:11.906 user 0m0.646s 00:07:11.906 sys 0m0.410s 00:07:11.906 13:55:20 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.906 ************************************ 00:07:11.906 END TEST env_vtophys 00:07:11.906 ************************************ 00:07:11.906 13:55:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:11.906 13:55:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:11.906 13:55:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.906 13:55:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.906 13:55:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.906 ************************************ 00:07:11.906 START TEST env_pci 00:07:11.906 ************************************ 00:07:11.906 13:55:21 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:11.906 00:07:11.906 00:07:11.906 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.906 http://cunit.sourceforge.net/ 00:07:11.906 00:07:11.906 00:07:11.906 Suite: pci 00:07:11.906 Test: pci_hook ...[2024-07-25 13:55:21.034431] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58953 has claimed it 00:07:11.906 passed 00:07:11.906 00:07:11.906 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.906 suites 1 1 n/a 0 0 00:07:11.906 tests 1 1 1 0 0 00:07:11.906 asserts 25 25 25 0 n/a 00:07:11.906 00:07:11.906 Elapsed time = 0.002 seconds 00:07:11.906 EAL: Cannot find 
device (10000:00:01.0) 00:07:11.906 EAL: Failed to attach device on primary process 00:07:11.906 ************************************ 00:07:11.906 END TEST env_pci 00:07:11.906 ************************************ 00:07:11.906 00:07:11.906 real 0m0.027s 00:07:11.906 user 0m0.009s 00:07:11.906 sys 0m0.017s 00:07:11.906 13:55:21 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.906 13:55:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:11.906 13:55:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:11.906 13:55:21 env -- env/env.sh@15 -- # uname 00:07:11.906 13:55:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:11.906 13:55:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:11.906 13:55:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.906 13:55:21 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.906 13:55:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.906 13:55:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.906 ************************************ 00:07:11.906 START TEST env_dpdk_post_init 00:07:11.906 ************************************ 00:07:11.906 13:55:21 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.906 EAL: Detected CPU lcores: 10 00:07:11.906 EAL: Detected NUMA nodes: 1 00:07:11.906 EAL: Detected shared linkage of DPDK 00:07:11.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:11.906 EAL: Selected IOVA mode 'PA' 00:07:12.169 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:12.169 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:12.169 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:12.169 Starting DPDK initialization... 00:07:12.169 Starting SPDK post initialization... 00:07:12.169 SPDK NVMe probe 00:07:12.169 Attaching to 0000:00:10.0 00:07:12.169 Attaching to 0000:00:11.0 00:07:12.169 Attached to 0000:00:10.0 00:07:12.169 Attached to 0000:00:11.0 00:07:12.169 Cleaning up... 
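The env_dpdk_post_init run above gets its arguments from env.sh exactly as traced: a one-core mask plus, on Linux, a fixed base virtual address. A minimal manual equivalent in bash (sudo and the direct invocation are assumptions; the CI harness goes through run_test instead):

repo=/home/vagrant/spdk_repo/spdk
argv='-c 0x1 '                              # single-core mask, as in env.sh@14
if [ "$(uname)" = Linux ]; then
    argv+='--base-virtaddr=0x200000000000'  # appended by env.sh@22 on Linux
fi
# $argv is intentionally unquoted so the two options are split into separate words.
sudo "$repo/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv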
00:07:12.169 00:07:12.169 real 0m0.186s 00:07:12.169 user 0m0.060s 00:07:12.169 sys 0m0.026s 00:07:12.169 13:55:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.169 ************************************ 00:07:12.169 END TEST env_dpdk_post_init 00:07:12.169 ************************************ 00:07:12.169 13:55:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 13:55:21 env -- env/env.sh@26 -- # uname 00:07:12.169 13:55:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:12.169 13:55:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:12.169 13:55:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.169 13:55:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.169 13:55:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 ************************************ 00:07:12.169 START TEST env_mem_callbacks 00:07:12.169 ************************************ 00:07:12.169 13:55:21 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:12.169 EAL: Detected CPU lcores: 10 00:07:12.169 EAL: Detected NUMA nodes: 1 00:07:12.169 EAL: Detected shared linkage of DPDK 00:07:12.169 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:12.169 EAL: Selected IOVA mode 'PA' 00:07:12.429 00:07:12.429 00:07:12.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.429 http://cunit.sourceforge.net/ 00:07:12.429 00:07:12.429 00:07:12.429 Suite: memory 00:07:12.429 Test: test ... 00:07:12.429 register 0x200000200000 2097152 00:07:12.429 malloc 3145728 00:07:12.429 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:12.429 register 0x200000400000 4194304 00:07:12.429 buf 0x200000500000 len 3145728 PASSED 00:07:12.429 malloc 64 00:07:12.429 buf 0x2000004fff40 len 64 PASSED 00:07:12.429 malloc 4194304 00:07:12.429 register 0x200000800000 6291456 00:07:12.429 buf 0x200000a00000 len 4194304 PASSED 00:07:12.429 free 0x200000500000 3145728 00:07:12.429 free 0x2000004fff40 64 00:07:12.429 unregister 0x200000400000 4194304 PASSED 00:07:12.429 free 0x200000a00000 4194304 00:07:12.429 unregister 0x200000800000 6291456 PASSED 00:07:12.429 malloc 8388608 00:07:12.429 register 0x200000400000 10485760 00:07:12.429 buf 0x200000600000 len 8388608 PASSED 00:07:12.429 free 0x200000600000 8388608 00:07:12.429 unregister 0x200000400000 10485760 PASSED 00:07:12.429 passed 00:07:12.429 00:07:12.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.429 suites 1 1 n/a 0 0 00:07:12.429 tests 1 1 1 0 0 00:07:12.429 asserts 15 15 15 0 n/a 00:07:12.429 00:07:12.429 Elapsed time = 0.010 seconds 00:07:12.429 00:07:12.429 real 0m0.149s 00:07:12.429 user 0m0.021s 00:07:12.429 sys 0m0.025s 00:07:12.429 13:55:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.429 13:55:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 ************************************ 00:07:12.429 END TEST env_mem_callbacks 00:07:12.429 ************************************ 00:07:12.429 00:07:12.429 real 0m2.193s 00:07:12.429 user 0m1.084s 00:07:12.429 sys 0m0.777s 00:07:12.429 13:55:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.429 13:55:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 ************************************ 00:07:12.429 END TEST env 00:07:12.429 
************************************ 00:07:12.429 13:55:21 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:12.429 13:55:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.429 13:55:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.429 13:55:21 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 ************************************ 00:07:12.429 START TEST rpc 00:07:12.429 ************************************ 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:12.429 * Looking for test storage... 00:07:12.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:12.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.429 13:55:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59068 00:07:12.429 13:55:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.429 13:55:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59068 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@831 -- # '[' -z 59068 ']' 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.429 13:55:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 13:55:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:12.688 [2024-07-25 13:55:21.788517] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:12.688 [2024-07-25 13:55:21.788996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:07:12.688 [2024-07-25 13:55:21.927425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.947 [2024-07-25 13:55:22.031396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:12.947 [2024-07-25 13:55:22.031548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59068' to capture a snapshot of events at runtime. 00:07:12.947 [2024-07-25 13:55:22.031583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.947 [2024-07-25 13:55:22.031610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.947 [2024-07-25 13:55:22.031642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59068 for offline analysis/debug. 
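The rpc-test target above is started with '-e bdev', which is why the bdev tracepoint group shows up fully enabled (tpoint_mask 0xffffffffffffffff) in the trace_get_info output further down. The notices printed by the target already give the capture commands for this particular run; as a sketch (PID 59068 and the /dev/shm path are specific to this run, and the spdk_trace tool is assumed to live under build/bin of the checked-out tree):

  # live snapshot of the enabled tracepoints, exactly as the notice suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 59068
  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid59068 /tmp/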
00:07:12.947 [2024-07-25 13:55:22.031691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.947 [2024-07-25 13:55:22.073658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.516 13:55:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.516 13:55:22 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:13.516 13:55:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:13.516 13:55:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:13.516 13:55:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:13.516 13:55:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:13.516 13:55:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.516 13:55:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.516 13:55:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 ************************************ 00:07:13.516 START TEST rpc_integrity 00:07:13.516 ************************************ 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:13.516 { 00:07:13.516 "name": "Malloc0", 00:07:13.516 "aliases": [ 00:07:13.516 "d92a54d3-7f5c-4971-9253-bffca4a1be07" 00:07:13.516 ], 00:07:13.516 "product_name": "Malloc disk", 00:07:13.516 "block_size": 512, 00:07:13.516 "num_blocks": 16384, 00:07:13.516 "uuid": "d92a54d3-7f5c-4971-9253-bffca4a1be07", 00:07:13.516 "assigned_rate_limits": { 00:07:13.516 "rw_ios_per_sec": 0, 00:07:13.516 "rw_mbytes_per_sec": 0, 00:07:13.516 "r_mbytes_per_sec": 0, 00:07:13.516 "w_mbytes_per_sec": 0 00:07:13.516 }, 00:07:13.516 "claimed": false, 00:07:13.516 "zoned": false, 00:07:13.516 
"supported_io_types": { 00:07:13.516 "read": true, 00:07:13.516 "write": true, 00:07:13.516 "unmap": true, 00:07:13.516 "flush": true, 00:07:13.516 "reset": true, 00:07:13.516 "nvme_admin": false, 00:07:13.516 "nvme_io": false, 00:07:13.516 "nvme_io_md": false, 00:07:13.516 "write_zeroes": true, 00:07:13.516 "zcopy": true, 00:07:13.516 "get_zone_info": false, 00:07:13.516 "zone_management": false, 00:07:13.516 "zone_append": false, 00:07:13.516 "compare": false, 00:07:13.516 "compare_and_write": false, 00:07:13.516 "abort": true, 00:07:13.516 "seek_hole": false, 00:07:13.516 "seek_data": false, 00:07:13.516 "copy": true, 00:07:13.516 "nvme_iov_md": false 00:07:13.516 }, 00:07:13.516 "memory_domains": [ 00:07:13.516 { 00:07:13.516 "dma_device_id": "system", 00:07:13.516 "dma_device_type": 1 00:07:13.516 }, 00:07:13.516 { 00:07:13.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.516 "dma_device_type": 2 00:07:13.516 } 00:07:13.516 ], 00:07:13.516 "driver_specific": {} 00:07:13.516 } 00:07:13.516 ]' 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 [2024-07-25 13:55:22.805396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:13.516 [2024-07-25 13:55:22.805447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.516 [2024-07-25 13:55:22.805463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x518da0 00:07:13.516 [2024-07-25 13:55:22.805470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.516 [2024-07-25 13:55:22.806866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.516 [2024-07-25 13:55:22.806899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:13.516 Passthru0 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.516 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.516 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.777 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.777 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:13.777 { 00:07:13.777 "name": "Malloc0", 00:07:13.777 "aliases": [ 00:07:13.777 "d92a54d3-7f5c-4971-9253-bffca4a1be07" 00:07:13.777 ], 00:07:13.777 "product_name": "Malloc disk", 00:07:13.777 "block_size": 512, 00:07:13.777 "num_blocks": 16384, 00:07:13.777 "uuid": "d92a54d3-7f5c-4971-9253-bffca4a1be07", 00:07:13.777 "assigned_rate_limits": { 00:07:13.777 "rw_ios_per_sec": 0, 00:07:13.777 "rw_mbytes_per_sec": 0, 00:07:13.777 "r_mbytes_per_sec": 0, 00:07:13.777 "w_mbytes_per_sec": 0 00:07:13.777 }, 00:07:13.777 "claimed": true, 00:07:13.777 "claim_type": "exclusive_write", 00:07:13.777 "zoned": false, 00:07:13.778 "supported_io_types": { 00:07:13.778 "read": true, 00:07:13.778 "write": true, 00:07:13.778 "unmap": true, 00:07:13.778 "flush": true, 00:07:13.778 "reset": true, 00:07:13.778 "nvme_admin": false, 
00:07:13.778 "nvme_io": false, 00:07:13.778 "nvme_io_md": false, 00:07:13.778 "write_zeroes": true, 00:07:13.778 "zcopy": true, 00:07:13.778 "get_zone_info": false, 00:07:13.778 "zone_management": false, 00:07:13.778 "zone_append": false, 00:07:13.778 "compare": false, 00:07:13.778 "compare_and_write": false, 00:07:13.778 "abort": true, 00:07:13.778 "seek_hole": false, 00:07:13.778 "seek_data": false, 00:07:13.778 "copy": true, 00:07:13.778 "nvme_iov_md": false 00:07:13.778 }, 00:07:13.778 "memory_domains": [ 00:07:13.778 { 00:07:13.778 "dma_device_id": "system", 00:07:13.778 "dma_device_type": 1 00:07:13.778 }, 00:07:13.778 { 00:07:13.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.778 "dma_device_type": 2 00:07:13.778 } 00:07:13.778 ], 00:07:13.778 "driver_specific": {} 00:07:13.778 }, 00:07:13.778 { 00:07:13.778 "name": "Passthru0", 00:07:13.778 "aliases": [ 00:07:13.778 "48fa6ef8-c957-5f40-899f-872f82587b29" 00:07:13.778 ], 00:07:13.778 "product_name": "passthru", 00:07:13.778 "block_size": 512, 00:07:13.778 "num_blocks": 16384, 00:07:13.778 "uuid": "48fa6ef8-c957-5f40-899f-872f82587b29", 00:07:13.778 "assigned_rate_limits": { 00:07:13.778 "rw_ios_per_sec": 0, 00:07:13.778 "rw_mbytes_per_sec": 0, 00:07:13.778 "r_mbytes_per_sec": 0, 00:07:13.778 "w_mbytes_per_sec": 0 00:07:13.778 }, 00:07:13.778 "claimed": false, 00:07:13.778 "zoned": false, 00:07:13.778 "supported_io_types": { 00:07:13.778 "read": true, 00:07:13.778 "write": true, 00:07:13.778 "unmap": true, 00:07:13.778 "flush": true, 00:07:13.778 "reset": true, 00:07:13.778 "nvme_admin": false, 00:07:13.778 "nvme_io": false, 00:07:13.778 "nvme_io_md": false, 00:07:13.778 "write_zeroes": true, 00:07:13.778 "zcopy": true, 00:07:13.778 "get_zone_info": false, 00:07:13.778 "zone_management": false, 00:07:13.778 "zone_append": false, 00:07:13.778 "compare": false, 00:07:13.778 "compare_and_write": false, 00:07:13.778 "abort": true, 00:07:13.778 "seek_hole": false, 00:07:13.778 "seek_data": false, 00:07:13.778 "copy": true, 00:07:13.778 "nvme_iov_md": false 00:07:13.778 }, 00:07:13.778 "memory_domains": [ 00:07:13.778 { 00:07:13.778 "dma_device_id": "system", 00:07:13.778 "dma_device_type": 1 00:07:13.778 }, 00:07:13.778 { 00:07:13.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.778 "dma_device_type": 2 00:07:13.778 } 00:07:13.778 ], 00:07:13.778 "driver_specific": { 00:07:13.778 "passthru": { 00:07:13.778 "name": "Passthru0", 00:07:13.778 "base_bdev_name": "Malloc0" 00:07:13.778 } 00:07:13.778 } 00:07:13.778 } 00:07:13.778 ]' 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:13.778 13:55:22 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:13.778 ************************************ 00:07:13.778 END TEST rpc_integrity 00:07:13.778 ************************************ 00:07:13.778 13:55:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:13.778 00:07:13.778 real 0m0.321s 00:07:13.778 user 0m0.189s 00:07:13.778 sys 0m0.050s 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.778 13:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:13.778 13:55:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.778 13:55:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.778 13:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 ************************************ 00:07:13.778 START TEST rpc_plugins 00:07:13.778 ************************************ 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:13.778 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.778 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:13.778 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:13.778 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.778 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:13.778 { 00:07:13.778 "name": "Malloc1", 00:07:13.778 "aliases": [ 00:07:13.779 "fba532dd-2c4d-43bf-b099-a366c5f7c244" 00:07:13.779 ], 00:07:13.779 "product_name": "Malloc disk", 00:07:13.779 "block_size": 4096, 00:07:13.779 "num_blocks": 256, 00:07:13.779 "uuid": "fba532dd-2c4d-43bf-b099-a366c5f7c244", 00:07:13.779 "assigned_rate_limits": { 00:07:13.779 "rw_ios_per_sec": 0, 00:07:13.779 "rw_mbytes_per_sec": 0, 00:07:13.779 "r_mbytes_per_sec": 0, 00:07:13.779 "w_mbytes_per_sec": 0 00:07:13.779 }, 00:07:13.779 "claimed": false, 00:07:13.779 "zoned": false, 00:07:13.779 "supported_io_types": { 00:07:13.779 "read": true, 00:07:13.779 "write": true, 00:07:13.779 "unmap": true, 00:07:13.779 "flush": true, 00:07:13.779 "reset": true, 00:07:13.779 "nvme_admin": false, 00:07:13.779 "nvme_io": false, 00:07:13.779 "nvme_io_md": false, 00:07:13.779 "write_zeroes": true, 00:07:13.779 "zcopy": true, 00:07:13.779 "get_zone_info": false, 00:07:13.779 "zone_management": false, 00:07:13.779 "zone_append": false, 00:07:13.779 "compare": false, 00:07:13.779 "compare_and_write": false, 00:07:13.779 "abort": true, 00:07:13.779 "seek_hole": false, 00:07:13.779 "seek_data": false, 00:07:13.779 "copy": true, 00:07:13.779 "nvme_iov_md": false 00:07:13.779 }, 00:07:13.779 "memory_domains": [ 00:07:13.779 { 
00:07:13.779 "dma_device_id": "system", 00:07:13.779 "dma_device_type": 1 00:07:13.779 }, 00:07:13.779 { 00:07:13.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.779 "dma_device_type": 2 00:07:13.779 } 00:07:13.779 ], 00:07:13.779 "driver_specific": {} 00:07:13.779 } 00:07:13.779 ]' 00:07:13.779 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:14.039 ************************************ 00:07:14.039 END TEST rpc_plugins 00:07:14.039 ************************************ 00:07:14.039 13:55:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:14.039 00:07:14.039 real 0m0.159s 00:07:14.039 user 0m0.090s 00:07:14.039 sys 0m0.031s 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.039 13:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 13:55:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:14.039 13:55:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.039 13:55:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.039 13:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 ************************************ 00:07:14.039 START TEST rpc_trace_cmd_test 00:07:14.039 ************************************ 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:14.039 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59068", 00:07:14.039 "tpoint_group_mask": "0x8", 00:07:14.039 "iscsi_conn": { 00:07:14.039 "mask": "0x2", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "scsi": { 00:07:14.039 "mask": "0x4", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "bdev": { 00:07:14.039 "mask": "0x8", 00:07:14.039 "tpoint_mask": "0xffffffffffffffff" 00:07:14.039 }, 00:07:14.039 "nvmf_rdma": { 00:07:14.039 "mask": "0x10", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "nvmf_tcp": { 00:07:14.039 "mask": "0x20", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "ftl": { 00:07:14.039 
"mask": "0x40", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "blobfs": { 00:07:14.039 "mask": "0x80", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "dsa": { 00:07:14.039 "mask": "0x200", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "thread": { 00:07:14.039 "mask": "0x400", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "nvme_pcie": { 00:07:14.039 "mask": "0x800", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "iaa": { 00:07:14.039 "mask": "0x1000", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "nvme_tcp": { 00:07:14.039 "mask": "0x2000", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "bdev_nvme": { 00:07:14.039 "mask": "0x4000", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 }, 00:07:14.039 "sock": { 00:07:14.039 "mask": "0x8000", 00:07:14.039 "tpoint_mask": "0x0" 00:07:14.039 } 00:07:14.039 }' 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:14.039 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:14.299 ************************************ 00:07:14.299 END TEST rpc_trace_cmd_test 00:07:14.299 ************************************ 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:14.299 00:07:14.299 real 0m0.240s 00:07:14.299 user 0m0.197s 00:07:14.299 sys 0m0.032s 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.299 13:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.299 13:55:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:14.299 13:55:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:14.299 13:55:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:14.299 13:55:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.299 13:55:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.299 13:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.299 ************************************ 00:07:14.299 START TEST rpc_daemon_integrity 00:07:14.299 ************************************ 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:14.299 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.558 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:14.559 { 00:07:14.559 "name": "Malloc2", 00:07:14.559 "aliases": [ 00:07:14.559 "26be62c6-579f-411f-ac92-c570803b1a85" 00:07:14.559 ], 00:07:14.559 "product_name": "Malloc disk", 00:07:14.559 "block_size": 512, 00:07:14.559 "num_blocks": 16384, 00:07:14.559 "uuid": "26be62c6-579f-411f-ac92-c570803b1a85", 00:07:14.559 "assigned_rate_limits": { 00:07:14.559 "rw_ios_per_sec": 0, 00:07:14.559 "rw_mbytes_per_sec": 0, 00:07:14.559 "r_mbytes_per_sec": 0, 00:07:14.559 "w_mbytes_per_sec": 0 00:07:14.559 }, 00:07:14.559 "claimed": false, 00:07:14.559 "zoned": false, 00:07:14.559 "supported_io_types": { 00:07:14.559 "read": true, 00:07:14.559 "write": true, 00:07:14.559 "unmap": true, 00:07:14.559 "flush": true, 00:07:14.559 "reset": true, 00:07:14.559 "nvme_admin": false, 00:07:14.559 "nvme_io": false, 00:07:14.559 "nvme_io_md": false, 00:07:14.559 "write_zeroes": true, 00:07:14.559 "zcopy": true, 00:07:14.559 "get_zone_info": false, 00:07:14.559 "zone_management": false, 00:07:14.559 "zone_append": false, 00:07:14.559 "compare": false, 00:07:14.559 "compare_and_write": false, 00:07:14.559 "abort": true, 00:07:14.559 "seek_hole": false, 00:07:14.559 "seek_data": false, 00:07:14.559 "copy": true, 00:07:14.559 "nvme_iov_md": false 00:07:14.559 }, 00:07:14.559 "memory_domains": [ 00:07:14.559 { 00:07:14.559 "dma_device_id": "system", 00:07:14.559 "dma_device_type": 1 00:07:14.559 }, 00:07:14.559 { 00:07:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.559 "dma_device_type": 2 00:07:14.559 } 00:07:14.559 ], 00:07:14.559 "driver_specific": {} 00:07:14.559 } 00:07:14.559 ]' 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 [2024-07-25 13:55:23.699924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:14.559 [2024-07-25 13:55:23.699968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.559 [2024-07-25 13:55:23.699984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x57dbe0 00:07:14.559 [2024-07-25 13:55:23.699991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.559 [2024-07-25 13:55:23.701283] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.559 [2024-07-25 13:55:23.701326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:14.559 Passthru0 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:14.559 { 00:07:14.559 "name": "Malloc2", 00:07:14.559 "aliases": [ 00:07:14.559 "26be62c6-579f-411f-ac92-c570803b1a85" 00:07:14.559 ], 00:07:14.559 "product_name": "Malloc disk", 00:07:14.559 "block_size": 512, 00:07:14.559 "num_blocks": 16384, 00:07:14.559 "uuid": "26be62c6-579f-411f-ac92-c570803b1a85", 00:07:14.559 "assigned_rate_limits": { 00:07:14.559 "rw_ios_per_sec": 0, 00:07:14.559 "rw_mbytes_per_sec": 0, 00:07:14.559 "r_mbytes_per_sec": 0, 00:07:14.559 "w_mbytes_per_sec": 0 00:07:14.559 }, 00:07:14.559 "claimed": true, 00:07:14.559 "claim_type": "exclusive_write", 00:07:14.559 "zoned": false, 00:07:14.559 "supported_io_types": { 00:07:14.559 "read": true, 00:07:14.559 "write": true, 00:07:14.559 "unmap": true, 00:07:14.559 "flush": true, 00:07:14.559 "reset": true, 00:07:14.559 "nvme_admin": false, 00:07:14.559 "nvme_io": false, 00:07:14.559 "nvme_io_md": false, 00:07:14.559 "write_zeroes": true, 00:07:14.559 "zcopy": true, 00:07:14.559 "get_zone_info": false, 00:07:14.559 "zone_management": false, 00:07:14.559 "zone_append": false, 00:07:14.559 "compare": false, 00:07:14.559 "compare_and_write": false, 00:07:14.559 "abort": true, 00:07:14.559 "seek_hole": false, 00:07:14.559 "seek_data": false, 00:07:14.559 "copy": true, 00:07:14.559 "nvme_iov_md": false 00:07:14.559 }, 00:07:14.559 "memory_domains": [ 00:07:14.559 { 00:07:14.559 "dma_device_id": "system", 00:07:14.559 "dma_device_type": 1 00:07:14.559 }, 00:07:14.559 { 00:07:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.559 "dma_device_type": 2 00:07:14.559 } 00:07:14.559 ], 00:07:14.559 "driver_specific": {} 00:07:14.559 }, 00:07:14.559 { 00:07:14.559 "name": "Passthru0", 00:07:14.559 "aliases": [ 00:07:14.559 "15f561aa-6ab1-58b2-90fb-0c73ba666cfa" 00:07:14.559 ], 00:07:14.559 "product_name": "passthru", 00:07:14.559 "block_size": 512, 00:07:14.559 "num_blocks": 16384, 00:07:14.559 "uuid": "15f561aa-6ab1-58b2-90fb-0c73ba666cfa", 00:07:14.559 "assigned_rate_limits": { 00:07:14.559 "rw_ios_per_sec": 0, 00:07:14.559 "rw_mbytes_per_sec": 0, 00:07:14.559 "r_mbytes_per_sec": 0, 00:07:14.559 "w_mbytes_per_sec": 0 00:07:14.559 }, 00:07:14.559 "claimed": false, 00:07:14.559 "zoned": false, 00:07:14.559 "supported_io_types": { 00:07:14.559 "read": true, 00:07:14.559 "write": true, 00:07:14.559 "unmap": true, 00:07:14.559 "flush": true, 00:07:14.559 "reset": true, 00:07:14.559 "nvme_admin": false, 00:07:14.559 "nvme_io": false, 00:07:14.559 "nvme_io_md": false, 00:07:14.559 "write_zeroes": true, 00:07:14.559 "zcopy": true, 00:07:14.559 "get_zone_info": false, 00:07:14.559 "zone_management": false, 00:07:14.559 "zone_append": false, 00:07:14.559 "compare": false, 00:07:14.559 "compare_and_write": false, 00:07:14.559 "abort": true, 00:07:14.559 "seek_hole": false, 
00:07:14.559 "seek_data": false, 00:07:14.559 "copy": true, 00:07:14.559 "nvme_iov_md": false 00:07:14.559 }, 00:07:14.559 "memory_domains": [ 00:07:14.559 { 00:07:14.559 "dma_device_id": "system", 00:07:14.559 "dma_device_type": 1 00:07:14.559 }, 00:07:14.559 { 00:07:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.559 "dma_device_type": 2 00:07:14.559 } 00:07:14.559 ], 00:07:14.559 "driver_specific": { 00:07:14.559 "passthru": { 00:07:14.559 "name": "Passthru0", 00:07:14.559 "base_bdev_name": "Malloc2" 00:07:14.559 } 00:07:14.559 } 00:07:14.559 } 00:07:14.559 ]' 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:14.559 ************************************ 00:07:14.559 END TEST rpc_daemon_integrity 00:07:14.559 ************************************ 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:14.559 00:07:14.559 real 0m0.308s 00:07:14.559 user 0m0.193s 00:07:14.559 sys 0m0.047s 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.559 13:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:14.819 13:55:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:14.819 13:55:23 rpc -- rpc/rpc.sh@84 -- # killprocess 59068 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@950 -- # '[' -z 59068 ']' 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@954 -- # kill -0 59068 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@955 -- # uname 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59068 00:07:14.819 killing process with pid 59068 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59068' 00:07:14.819 13:55:23 rpc -- common/autotest_common.sh@969 -- # kill 59068 00:07:14.819 13:55:23 
rpc -- common/autotest_common.sh@974 -- # wait 59068 00:07:15.077 00:07:15.077 real 0m2.638s 00:07:15.077 user 0m3.302s 00:07:15.077 sys 0m0.717s 00:07:15.077 13:55:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.077 13:55:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.077 ************************************ 00:07:15.077 END TEST rpc 00:07:15.077 ************************************ 00:07:15.077 13:55:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:15.077 13:55:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.077 13:55:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.077 13:55:24 -- common/autotest_common.sh@10 -- # set +x 00:07:15.077 ************************************ 00:07:15.077 START TEST skip_rpc 00:07:15.077 ************************************ 00:07:15.077 13:55:24 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:15.336 * Looking for test storage... 00:07:15.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:15.336 13:55:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:15.336 13:55:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:15.336 13:55:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:15.336 13:55:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.336 13:55:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.336 13:55:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.336 ************************************ 00:07:15.336 START TEST skip_rpc 00:07:15.336 ************************************ 00:07:15.336 13:55:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:15.336 13:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:15.336 13:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59266 00:07:15.336 13:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.336 13:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:15.336 [2024-07-25 13:55:24.512834] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
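The skip_rpc case that starts here launches the target with --no-rpc-server, so no RPC listener is created and the subsequent spdk_get_version call is expected to fail (the NOT wrapper below checks exactly that). A stand-alone sketch of the same check, assuming the default /var/tmp/spdk.sock path used elsewhere in this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                              # the test likewise just sleeps before probing
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC server answered"
  else
      echo "RPC correctly unavailable"                 # expected outcome with --no-rpc-server
  fi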
00:07:15.336 [2024-07-25 13:55:24.512907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:07:15.595 [2024-07-25 13:55:24.651121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.595 [2024-07-25 13:55:24.749504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.595 [2024-07-25 13:55:24.790612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59266 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59266 ']' 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59266 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59266 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.862 killing process with pid 59266 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59266' 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59266 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59266 00:07:20.862 00:07:20.862 real 0m5.372s 00:07:20.862 user 0m5.059s 00:07:20.862 sys 0m0.229s 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.862 13:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:20.862 ************************************ 00:07:20.862 END TEST skip_rpc 00:07:20.862 ************************************ 00:07:20.862 13:55:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:20.862 13:55:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.862 13:55:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.862 13:55:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 ************************************ 00:07:20.862 START TEST skip_rpc_with_json 00:07:20.862 ************************************ 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59347 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59347 00:07:20.862 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59347 ']' 00:07:20.863 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.863 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.863 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.863 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.863 13:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 [2024-07-25 13:55:29.934796] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
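skip_rpc_with_json, which starts here, saves the running target's configuration and then proves a second target can be brought up non-interactively from that JSON. The sketch below mirrors steps that appear later in this log (nvmf_create_transport, save_config, the --json restart, the grep for 'TCP Transport Init'); the process management is simplified, so treat it as an outline rather than the actual test script:

  cd /home/vagrant/spdk_repo/spdk
  build/bin/spdk_tgt -m 0x1 &                                    # first target, RPC server enabled
  scripts/rpc.py nvmf_create_transport -t tcp                    # state that must survive the restart
  scripts/rpc.py save_config > test/rpc/config.json
  kill $!; wait
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
      > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt                  # transport was restored from the JSON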
00:07:20.863 [2024-07-25 13:55:29.934862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ] 00:07:20.863 [2024-07-25 13:55:30.074396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.121 [2024-07-25 13:55:30.182561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.121 [2024-07-25 13:55:30.225530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.689 [2024-07-25 13:55:30.804258] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:21.689 request: 00:07:21.689 { 00:07:21.689 "trtype": "tcp", 00:07:21.689 "method": "nvmf_get_transports", 00:07:21.689 "req_id": 1 00:07:21.689 } 00:07:21.689 Got JSON-RPC error response 00:07:21.689 response: 00:07:21.689 { 00:07:21.689 "code": -19, 00:07:21.689 "message": "No such device" 00:07:21.689 } 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.689 [2024-07-25 13:55:30.816350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.689 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:21.689 { 00:07:21.689 "subsystems": [ 00:07:21.689 { 00:07:21.689 "subsystem": "keyring", 00:07:21.689 "config": [] 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "subsystem": "iobuf", 00:07:21.689 "config": [ 00:07:21.689 { 00:07:21.689 "method": "iobuf_set_options", 00:07:21.689 "params": { 00:07:21.689 "small_pool_count": 8192, 00:07:21.689 "large_pool_count": 1024, 00:07:21.689 "small_bufsize": 8192, 00:07:21.689 "large_bufsize": 135168 00:07:21.689 } 00:07:21.689 } 00:07:21.689 ] 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "subsystem": "sock", 00:07:21.689 "config": [ 00:07:21.689 { 00:07:21.689 "method": "sock_set_default_impl", 00:07:21.689 "params": { 00:07:21.689 "impl_name": "uring" 00:07:21.689 } 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "method": "sock_impl_set_options", 
00:07:21.689 "params": { 00:07:21.689 "impl_name": "ssl", 00:07:21.689 "recv_buf_size": 4096, 00:07:21.689 "send_buf_size": 4096, 00:07:21.689 "enable_recv_pipe": true, 00:07:21.689 "enable_quickack": false, 00:07:21.689 "enable_placement_id": 0, 00:07:21.689 "enable_zerocopy_send_server": true, 00:07:21.689 "enable_zerocopy_send_client": false, 00:07:21.689 "zerocopy_threshold": 0, 00:07:21.689 "tls_version": 0, 00:07:21.689 "enable_ktls": false 00:07:21.689 } 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "method": "sock_impl_set_options", 00:07:21.689 "params": { 00:07:21.689 "impl_name": "posix", 00:07:21.689 "recv_buf_size": 2097152, 00:07:21.689 "send_buf_size": 2097152, 00:07:21.689 "enable_recv_pipe": true, 00:07:21.689 "enable_quickack": false, 00:07:21.689 "enable_placement_id": 0, 00:07:21.689 "enable_zerocopy_send_server": true, 00:07:21.689 "enable_zerocopy_send_client": false, 00:07:21.689 "zerocopy_threshold": 0, 00:07:21.689 "tls_version": 0, 00:07:21.689 "enable_ktls": false 00:07:21.689 } 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "method": "sock_impl_set_options", 00:07:21.689 "params": { 00:07:21.689 "impl_name": "uring", 00:07:21.689 "recv_buf_size": 2097152, 00:07:21.689 "send_buf_size": 2097152, 00:07:21.689 "enable_recv_pipe": true, 00:07:21.689 "enable_quickack": false, 00:07:21.689 "enable_placement_id": 0, 00:07:21.689 "enable_zerocopy_send_server": false, 00:07:21.689 "enable_zerocopy_send_client": false, 00:07:21.689 "zerocopy_threshold": 0, 00:07:21.689 "tls_version": 0, 00:07:21.689 "enable_ktls": false 00:07:21.689 } 00:07:21.689 } 00:07:21.689 ] 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "subsystem": "vmd", 00:07:21.689 "config": [] 00:07:21.689 }, 00:07:21.689 { 00:07:21.689 "subsystem": "accel", 00:07:21.689 "config": [ 00:07:21.689 { 00:07:21.689 "method": "accel_set_options", 00:07:21.689 "params": { 00:07:21.690 "small_cache_size": 128, 00:07:21.690 "large_cache_size": 16, 00:07:21.690 "task_count": 2048, 00:07:21.690 "sequence_count": 2048, 00:07:21.690 "buf_count": 2048 00:07:21.690 } 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "bdev", 00:07:21.690 "config": [ 00:07:21.690 { 00:07:21.690 "method": "bdev_set_options", 00:07:21.690 "params": { 00:07:21.690 "bdev_io_pool_size": 65535, 00:07:21.690 "bdev_io_cache_size": 256, 00:07:21.690 "bdev_auto_examine": true, 00:07:21.690 "iobuf_small_cache_size": 128, 00:07:21.690 "iobuf_large_cache_size": 16 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "bdev_raid_set_options", 00:07:21.690 "params": { 00:07:21.690 "process_window_size_kb": 1024, 00:07:21.690 "process_max_bandwidth_mb_sec": 0 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "bdev_iscsi_set_options", 00:07:21.690 "params": { 00:07:21.690 "timeout_sec": 30 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "bdev_nvme_set_options", 00:07:21.690 "params": { 00:07:21.690 "action_on_timeout": "none", 00:07:21.690 "timeout_us": 0, 00:07:21.690 "timeout_admin_us": 0, 00:07:21.690 "keep_alive_timeout_ms": 10000, 00:07:21.690 "arbitration_burst": 0, 00:07:21.690 "low_priority_weight": 0, 00:07:21.690 "medium_priority_weight": 0, 00:07:21.690 "high_priority_weight": 0, 00:07:21.690 "nvme_adminq_poll_period_us": 10000, 00:07:21.690 "nvme_ioq_poll_period_us": 0, 00:07:21.690 "io_queue_requests": 0, 00:07:21.690 "delay_cmd_submit": true, 00:07:21.690 "transport_retry_count": 4, 00:07:21.690 "bdev_retry_count": 3, 00:07:21.690 "transport_ack_timeout": 0, 
00:07:21.690 "ctrlr_loss_timeout_sec": 0, 00:07:21.690 "reconnect_delay_sec": 0, 00:07:21.690 "fast_io_fail_timeout_sec": 0, 00:07:21.690 "disable_auto_failback": false, 00:07:21.690 "generate_uuids": false, 00:07:21.690 "transport_tos": 0, 00:07:21.690 "nvme_error_stat": false, 00:07:21.690 "rdma_srq_size": 0, 00:07:21.690 "io_path_stat": false, 00:07:21.690 "allow_accel_sequence": false, 00:07:21.690 "rdma_max_cq_size": 0, 00:07:21.690 "rdma_cm_event_timeout_ms": 0, 00:07:21.690 "dhchap_digests": [ 00:07:21.690 "sha256", 00:07:21.690 "sha384", 00:07:21.690 "sha512" 00:07:21.690 ], 00:07:21.690 "dhchap_dhgroups": [ 00:07:21.690 "null", 00:07:21.690 "ffdhe2048", 00:07:21.690 "ffdhe3072", 00:07:21.690 "ffdhe4096", 00:07:21.690 "ffdhe6144", 00:07:21.690 "ffdhe8192" 00:07:21.690 ] 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "bdev_nvme_set_hotplug", 00:07:21.690 "params": { 00:07:21.690 "period_us": 100000, 00:07:21.690 "enable": false 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "bdev_wait_for_examine" 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "scsi", 00:07:21.690 "config": null 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "scheduler", 00:07:21.690 "config": [ 00:07:21.690 { 00:07:21.690 "method": "framework_set_scheduler", 00:07:21.690 "params": { 00:07:21.690 "name": "static" 00:07:21.690 } 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "vhost_scsi", 00:07:21.690 "config": [] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "vhost_blk", 00:07:21.690 "config": [] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "ublk", 00:07:21.690 "config": [] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "nbd", 00:07:21.690 "config": [] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "nvmf", 00:07:21.690 "config": [ 00:07:21.690 { 00:07:21.690 "method": "nvmf_set_config", 00:07:21.690 "params": { 00:07:21.690 "discovery_filter": "match_any", 00:07:21.690 "admin_cmd_passthru": { 00:07:21.690 "identify_ctrlr": false 00:07:21.690 } 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "nvmf_set_max_subsystems", 00:07:21.690 "params": { 00:07:21.690 "max_subsystems": 1024 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "nvmf_set_crdt", 00:07:21.690 "params": { 00:07:21.690 "crdt1": 0, 00:07:21.690 "crdt2": 0, 00:07:21.690 "crdt3": 0 00:07:21.690 } 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "method": "nvmf_create_transport", 00:07:21.690 "params": { 00:07:21.690 "trtype": "TCP", 00:07:21.690 "max_queue_depth": 128, 00:07:21.690 "max_io_qpairs_per_ctrlr": 127, 00:07:21.690 "in_capsule_data_size": 4096, 00:07:21.690 "max_io_size": 131072, 00:07:21.690 "io_unit_size": 131072, 00:07:21.690 "max_aq_depth": 128, 00:07:21.690 "num_shared_buffers": 511, 00:07:21.690 "buf_cache_size": 4294967295, 00:07:21.690 "dif_insert_or_strip": false, 00:07:21.690 "zcopy": false, 00:07:21.690 "c2h_success": true, 00:07:21.690 "sock_priority": 0, 00:07:21.690 "abort_timeout_sec": 1, 00:07:21.690 "ack_timeout": 0, 00:07:21.690 "data_wr_pool_size": 0 00:07:21.690 } 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 }, 00:07:21.690 { 00:07:21.690 "subsystem": "iscsi", 00:07:21.690 "config": [ 00:07:21.690 { 00:07:21.690 "method": "iscsi_set_options", 00:07:21.690 "params": { 00:07:21.690 "node_base": "iqn.2016-06.io.spdk", 00:07:21.690 "max_sessions": 128, 00:07:21.690 "max_connections_per_session": 2, 00:07:21.690 
"max_queue_depth": 64, 00:07:21.690 "default_time2wait": 2, 00:07:21.690 "default_time2retain": 20, 00:07:21.690 "first_burst_length": 8192, 00:07:21.690 "immediate_data": true, 00:07:21.690 "allow_duplicated_isid": false, 00:07:21.690 "error_recovery_level": 0, 00:07:21.690 "nop_timeout": 60, 00:07:21.690 "nop_in_interval": 30, 00:07:21.690 "disable_chap": false, 00:07:21.690 "require_chap": false, 00:07:21.690 "mutual_chap": false, 00:07:21.690 "chap_group": 0, 00:07:21.690 "max_large_datain_per_connection": 64, 00:07:21.690 "max_r2t_per_connection": 4, 00:07:21.690 "pdu_pool_size": 36864, 00:07:21.690 "immediate_data_pool_size": 16384, 00:07:21.690 "data_out_pool_size": 2048 00:07:21.690 } 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 } 00:07:21.690 ] 00:07:21.690 } 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59347 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59347 ']' 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59347 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:21.690 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.949 13:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59347 00:07:21.949 13:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.949 13:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.949 killing process with pid 59347 00:07:21.949 13:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59347' 00:07:21.949 13:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59347 00:07:21.949 13:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59347 00:07:22.207 13:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59375 00:07:22.207 13:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:22.207 13:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59375 ']' 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.470 killing process with pid 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59375' 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59375 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:27.470 00:07:27.470 real 0m6.833s 00:07:27.470 user 0m6.537s 00:07:27.470 sys 0m0.567s 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 ************************************ 00:07:27.470 END TEST skip_rpc_with_json 00:07:27.470 ************************************ 00:07:27.470 13:55:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:27.470 13:55:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.470 13:55:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.470 13:55:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 ************************************ 00:07:27.470 START TEST skip_rpc_with_delay 00:07:27.470 ************************************ 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.470 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:27.471 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:27.728 [2024-07-25 13:55:36.834603] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
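The "Cannot use '--wait-for-rpc'" error above (and the unclaim_cpu_cores error that follows) is the failure skip_rpc_with_delay deliberately provokes: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC server. A minimal sketch of that inverted assertion, using a simplified stand-in for the NOT helper from autotest_common.sh (the real helper does more bookkeeping):

# Simplified stand-in for the NOT helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}

# Mirrors the invocation shown above; spdk_tgt is expected to exit non-zero.
NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc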
00:07:27.728 [2024-07-25 13:55:36.834733] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.728 00:07:27.728 real 0m0.087s 00:07:27.728 user 0m0.046s 00:07:27.728 sys 0m0.039s 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.728 13:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:27.728 ************************************ 00:07:27.728 END TEST skip_rpc_with_delay 00:07:27.728 ************************************ 00:07:27.728 13:55:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:27.728 13:55:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:27.728 13:55:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:27.728 13:55:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.728 13:55:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.728 13:55:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.728 ************************************ 00:07:27.728 START TEST exit_on_failed_rpc_init 00:07:27.728 ************************************ 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59479 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59479 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59479 ']' 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.728 13:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:27.728 [2024-07-25 13:55:36.971557] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:07:27.728 [2024-07-25 13:55:36.971645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59479 ] 00:07:27.987 [2024-07-25 13:55:37.117321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.987 [2024-07-25 13:55:37.211206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.987 [2024-07-25 13:55:37.251120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:28.557 13:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:28.816 [2024-07-25 13:55:37.901519] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:28.816 [2024-07-25 13:55:37.901590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59497 ] 00:07:28.816 [2024-07-25 13:55:38.042253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.075 [2024-07-25 13:55:38.191995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.075 [2024-07-25 13:55:38.192104] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
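The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above, together with the "Unable to start RPC service" and spdk_app_stop messages that follow, is the outcome exit_on_failed_rpc_init checks for: the first target (pid 59479) already owns the default RPC socket, so a second instance on another core must fail to initialize and exit non-zero. A rough stand-alone reproduction of that assertion (the sleep-based wait is a simplification, not taken from this log):

# Hypothetical reproduction of the duplicate-RPC-socket failure asserted by this test.
./build/bin/spdk_tgt -m 0x1 &          # first instance claims /var/tmp/spdk.sock
first=$!
sleep 2                                # crude stand-in for waitforlisten
if ./build/bin/spdk_tgt -m 0x2; then   # second instance must fail to bind the same socket
    echo "unexpected: second instance started" >&2
    kill -SIGINT "$first"
    exit 1
fi
kill -SIGINT "$first"
wait "$first"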
00:07:29.075 [2024-07-25 13:55:38.192126] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:29.075 [2024-07-25 13:55:38.192132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59479 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59479 ']' 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59479 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59479 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.075 killing process with pid 59479 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59479' 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59479 00:07:29.075 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59479 00:07:29.643 00:07:29.643 real 0m1.779s 00:07:29.643 user 0m2.112s 00:07:29.643 sys 0m0.390s 00:07:29.643 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.643 ************************************ 00:07:29.643 END TEST exit_on_failed_rpc_init 00:07:29.643 ************************************ 00:07:29.643 13:55:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:29.643 13:55:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:29.643 00:07:29.643 real 0m14.416s 00:07:29.643 user 0m13.872s 00:07:29.643 sys 0m1.463s 00:07:29.643 13:55:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.643 13:55:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.643 ************************************ 00:07:29.643 END TEST skip_rpc 00:07:29.643 ************************************ 00:07:29.643 13:55:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:29.644 13:55:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.644 13:55:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.644 13:55:38 -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 
************************************ 00:07:29.644 START TEST rpc_client 00:07:29.644 ************************************ 00:07:29.644 13:55:38 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:29.644 * Looking for test storage... 00:07:29.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:29.644 13:55:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:29.904 OK 00:07:29.904 13:55:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:29.904 00:07:29.904 real 0m0.149s 00:07:29.904 user 0m0.069s 00:07:29.904 sys 0m0.089s 00:07:29.904 13:55:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.904 13:55:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:29.904 ************************************ 00:07:29.904 END TEST rpc_client 00:07:29.904 ************************************ 00:07:29.904 13:55:39 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:29.904 13:55:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.904 13:55:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.904 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:07:29.904 ************************************ 00:07:29.904 START TEST json_config 00:07:29.904 ************************************ 00:07:29.904 13:55:39 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:29.904 13:55:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.904 13:55:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.904 13:55:39 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.904 13:55:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.904 13:55:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.904 13:55:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.904 13:55:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.904 13:55:39 json_config -- paths/export.sh@5 -- # export PATH 00:07:29.904 13:55:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@47 -- # : 0 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.904 13:55:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.904 13:55:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:29.904 13:55:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:29.904 13:55:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:29.904 13:55:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:29.905 13:55:39 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:29.905 INFO: JSON configuration test init 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.905 13:55:39 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:29.905 13:55:39 json_config -- json_config/common.sh@9 -- # local app=target 00:07:29.905 13:55:39 json_config -- json_config/common.sh@10 -- # shift 00:07:29.905 13:55:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:29.905 13:55:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:29.905 13:55:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:29.905 13:55:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.905 13:55:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.905 Waiting for target to run... 00:07:29.905 13:55:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59620 00:07:29.905 13:55:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:29.905 13:55:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
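waitforlisten (from autotest_common.sh) is what turns the "Waiting for target to run..." message above into an actual synchronization point: it blocks until the just-launched target answers on its RPC socket. A rough equivalent, polling with rpc.py (retry count and sleep interval are assumptions, not values from this log):

# Illustrative polling loop: wait until the RPC socket at /var/tmp/spdk_tgt.sock responds.
sock=/var/tmp/spdk_tgt.sock
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break                          # target is up and serving RPCs
    fi
    sleep 0.1
done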
00:07:29.905 13:55:39 json_config -- json_config/common.sh@25 -- # waitforlisten 59620 /var/tmp/spdk_tgt.sock 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 59620 ']' 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:29.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.905 13:55:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.905 [2024-07-25 13:55:39.189956] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:29.905 [2024-07-25 13:55:39.190028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:07:30.473 [2024-07-25 13:55:39.637486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.473 [2024-07-25 13:55:39.720504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:31.040 13:55:40 json_config -- json_config/common.sh@26 -- # echo '' 00:07:31.040 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.040 13:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:31.040 13:55:40 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:31.040 13:55:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:31.299 [2024-07-25 13:55:40.391128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:31.564 13:55:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.564 13:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 
'bdev_unregister') 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:31.564 13:55:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@51 -- # sort 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:31.564 13:55:40 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:31.564 13:55:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.564 13:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:31.835 13:55:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.835 13:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:31.835 13:55:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:31.835 13:55:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:31.835 MallocForNvmf0 00:07:31.835 13:55:41 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:31.835 13:55:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:32.094 MallocForNvmf1 00:07:32.094 13:55:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:32.094 13:55:41 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:32.353 [2024-07-25 13:55:41.568448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.353 13:55:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.353 13:55:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.612 13:55:41 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:32.612 13:55:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:32.872 13:55:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:32.872 13:55:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:33.131 13:55:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:33.132 13:55:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:33.390 [2024-07-25 13:55:42.439192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:33.390 13:55:42 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:33.390 13:55:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.390 13:55:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.390 13:55:42 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:33.390 13:55:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.390 13:55:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.390 13:55:42 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:33.390 13:55:42 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:33.390 13:55:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:33.649 MallocBdevForConfigChangeCheck 00:07:33.649 13:55:42 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:33.649 13:55:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.649 13:55:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.649 13:55:42 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:33.649 13:55:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:34.221 INFO: shutting down applications... 00:07:34.221 13:55:43 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
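For readability, here is the RPC sequence the json_config test issued above to build the NVMe-oF target configuration, collected into one runnable block (arguments copied from the log; the $rpc shorthand is added for brevity):

# The configuration steps exercised above, as plain rpc.py calls.
rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck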
00:07:34.221 13:55:43 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:34.221 13:55:43 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:34.221 13:55:43 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:34.221 13:55:43 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:34.221 Calling clear_iscsi_subsystem 00:07:34.221 Calling clear_nvmf_subsystem 00:07:34.221 Calling clear_nbd_subsystem 00:07:34.221 Calling clear_ublk_subsystem 00:07:34.221 Calling clear_vhost_blk_subsystem 00:07:34.221 Calling clear_vhost_scsi_subsystem 00:07:34.221 Calling clear_bdev_subsystem 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:34.480 13:55:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:34.739 13:55:43 json_config -- json_config/json_config.sh@349 -- # break 00:07:34.739 13:55:43 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:34.739 13:55:43 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:34.739 13:55:43 json_config -- json_config/common.sh@31 -- # local app=target 00:07:34.739 13:55:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:34.739 13:55:43 json_config -- json_config/common.sh@35 -- # [[ -n 59620 ]] 00:07:34.739 13:55:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59620 00:07:34.739 13:55:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:34.739 13:55:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.739 13:55:43 json_config -- json_config/common.sh@41 -- # kill -0 59620 00:07:34.739 13:55:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:35.307 13:55:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:35.307 13:55:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:35.307 13:55:44 json_config -- json_config/common.sh@41 -- # kill -0 59620 00:07:35.307 13:55:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:35.307 13:55:44 json_config -- json_config/common.sh@43 -- # break 00:07:35.307 13:55:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:35.307 13:55:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:35.307 SPDK target shutdown done 00:07:35.307 13:55:44 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:35.307 INFO: relaunching applications... 
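The relaunch step that follows restarts spdk_tgt from the configuration saved a moment ago and then verifies that the live configuration matches the file. In outline (temporary file names here are illustrative; the test itself goes through json_diff.sh):

# Outline of the save / relaunch / compare round-trip performed below.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
# ...wait for the RPC socket as before, then dump the live config and compare:
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
diff -u /tmp/saved.sorted /tmp/live.sorted    # identical configuration: diff exits 0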
00:07:35.307 13:55:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:35.307 13:55:44 json_config -- json_config/common.sh@9 -- # local app=target 00:07:35.307 13:55:44 json_config -- json_config/common.sh@10 -- # shift 00:07:35.307 13:55:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:35.307 13:55:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:35.307 13:55:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:35.307 13:55:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.307 13:55:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.307 13:55:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59810 00:07:35.307 13:55:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:35.307 13:55:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:35.307 Waiting for target to run... 00:07:35.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:35.307 13:55:44 json_config -- json_config/common.sh@25 -- # waitforlisten 59810 /var/tmp/spdk_tgt.sock 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@831 -- # '[' -z 59810 ']' 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.307 13:55:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.307 [2024-07-25 13:55:44.504561] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:35.307 [2024-07-25 13:55:44.504625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:07:35.565 [2024-07-25 13:55:44.850090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.824 [2024-07-25 13:55:44.933415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.824 [2024-07-25 13:55:45.058819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.082 [2024-07-25 13:55:45.260502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.082 [2024-07-25 13:55:45.292532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:36.082 13:55:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.082 13:55:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:36.082 13:55:45 json_config -- json_config/common.sh@26 -- # echo '' 00:07:36.082 00:07:36.082 13:55:45 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:36.082 13:55:45 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:07:36.082 INFO: Checking if target configuration is the same... 00:07:36.082 13:55:45 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.082 13:55:45 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:36.082 13:55:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:36.082 + '[' 2 -ne 2 ']' 00:07:36.082 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:36.342 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:36.342 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:36.342 +++ basename /dev/fd/62 00:07:36.342 ++ mktemp /tmp/62.XXX 00:07:36.342 + tmp_file_1=/tmp/62.vPO 00:07:36.342 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.342 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:36.342 + tmp_file_2=/tmp/spdk_tgt_config.json.jF8 00:07:36.342 + ret=0 00:07:36.342 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:36.601 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:36.601 + diff -u /tmp/62.vPO /tmp/spdk_tgt_config.json.jF8 00:07:36.601 INFO: JSON config files are the same 00:07:36.601 + echo 'INFO: JSON config files are the same' 00:07:36.601 + rm /tmp/62.vPO /tmp/spdk_tgt_config.json.jF8 00:07:36.601 + exit 0 00:07:36.601 INFO: changing configuration and checking if this can be detected... 00:07:36.601 13:55:45 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:36.601 13:55:45 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:36.601 13:55:45 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.601 13:55:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.859 13:55:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:36.859 13:55:46 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.859 13:55:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:36.859 + '[' 2 -ne 2 ']' 00:07:36.859 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:36.859 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:36.859 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:36.859 +++ basename /dev/fd/62 00:07:36.859 ++ mktemp /tmp/62.XXX 00:07:36.859 + tmp_file_1=/tmp/62.1u7 00:07:36.859 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.859 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:36.859 + tmp_file_2=/tmp/spdk_tgt_config.json.NUL 00:07:36.859 + ret=0 00:07:36.859 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:37.426 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:37.426 + diff -u /tmp/62.1u7 /tmp/spdk_tgt_config.json.NUL 00:07:37.426 + ret=1 00:07:37.426 + echo '=== Start of file: /tmp/62.1u7 ===' 00:07:37.426 + cat /tmp/62.1u7 00:07:37.426 + echo '=== End of file: /tmp/62.1u7 ===' 00:07:37.426 + echo '' 00:07:37.426 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NUL ===' 00:07:37.426 + cat /tmp/spdk_tgt_config.json.NUL 00:07:37.426 + echo '=== End of file: /tmp/spdk_tgt_config.json.NUL ===' 00:07:37.426 + echo '' 00:07:37.426 + rm /tmp/62.1u7 /tmp/spdk_tgt_config.json.NUL 00:07:37.426 + exit 1 00:07:37.426 INFO: configuration change detected. 00:07:37.426 13:55:46 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:37.426 13:55:46 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:37.426 13:55:46 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:37.426 13:55:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.426 13:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.426 13:55:46 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:37.426 13:55:46 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@321 -- # [[ -n 59810 ]] 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.427 13:55:46 json_config -- json_config/json_config.sh@327 -- # killprocess 59810 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@950 -- # '[' -z 59810 ']' 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@954 -- # kill -0 59810 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@955 -- # uname 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59810 00:07:37.427 
13:55:46 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.427 killing process with pid 59810 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59810' 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@969 -- # kill 59810 00:07:37.427 13:55:46 json_config -- common/autotest_common.sh@974 -- # wait 59810 00:07:37.685 13:55:46 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:37.685 13:55:46 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:37.685 13:55:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.685 13:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.685 INFO: Success 00:07:37.685 13:55:46 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:37.685 13:55:46 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:37.685 ************************************ 00:07:37.685 END TEST json_config 00:07:37.685 ************************************ 00:07:37.685 00:07:37.685 real 0m7.945s 00:07:37.685 user 0m11.028s 00:07:37.685 sys 0m1.883s 00:07:37.685 13:55:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.685 13:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.945 13:55:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:37.945 13:55:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.945 13:55:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.945 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:07:37.945 ************************************ 00:07:37.945 START TEST json_config_extra_key 00:07:37.945 ************************************ 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:07:37.945 13:55:47 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.945 13:55:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.945 13:55:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.945 13:55:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.945 13:55:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.945 13:55:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.945 13:55:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.945 13:55:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:37.945 13:55:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.945 13:55:47 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.945 13:55:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:37.945 INFO: launching applications... 00:07:37.945 13:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59951 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:37.945 Waiting for target to run... 
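Unlike the plain json_config test, json_config_extra_key boots the target directly from a prewritten JSON file. The actual contents of extra_key.json are not shown in this log; purely as an illustration of the save_config-style format such a file uses, a minimal config might look like the following (bdev name and sizes are assumptions):

# Illustrative only: not the real extra_key.json used by this test.
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_config.json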
00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59951 /var/tmp/spdk_tgt.sock 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59951 ']' 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:37.945 13:55:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:37.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.945 13:55:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:37.945 [2024-07-25 13:55:47.202524] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:37.945 [2024-07-25 13:55:47.203185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:07:38.514 [2024-07-25 13:55:47.737439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.774 [2024-07-25 13:55:47.850365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.774 [2024-07-25 13:55:47.874943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.774 13:55:48 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.774 00:07:38.774 INFO: shutting down applications... 00:07:38.774 13:55:48 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:38.774 13:55:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
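waitforlisten, traced above, blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk_tgt.sock (the trace shows max_retries=100). A rough stand-in for that wait, built only from the rpc.py call that appears elsewhere in this log; the polling method, retry interval, and helper name are assumptions, not the real autotest_common.sh implementation.

  # Sketch: poll the RPC socket until the target answers or retries run out.
  wait_for_rpc_socket() {  # hypothetical helper, not the real waitforlisten
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
      local retries=100    # mirrors "local max_retries=100" in the trace above
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
              >/dev/null 2>&1 && return 0          # socket is up and serving RPCs
          sleep 0.1                                # assumed interval
      done
      return 1
  }

  # Usage: wait_for_rpc_socket "$!" /var/tmp/spdk_tgt.sock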
00:07:38.774 13:55:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59951 ]] 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59951 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59951 00:07:38.774 13:55:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59951 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:39.345 SPDK target shutdown done 00:07:39.345 Success 00:07:39.345 13:55:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:39.345 13:55:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:39.345 00:07:39.345 real 0m1.554s 00:07:39.345 user 0m1.142s 00:07:39.345 sys 0m0.565s 00:07:39.345 13:55:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.345 13:55:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:39.345 ************************************ 00:07:39.345 END TEST json_config_extra_key 00:07:39.345 ************************************ 00:07:39.345 13:55:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:39.345 13:55:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.345 13:55:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.345 13:55:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.345 ************************************ 00:07:39.345 START TEST alias_rpc 00:07:39.345 ************************************ 00:07:39.345 13:55:48 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:39.605 * Looking for test storage... 
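The shutdown traced just above is a plain signal-and-poll: kill -SIGINT, then up to 30 rounds of kill -0 with a 0.5 s sleep before 'SPDK target shutdown done' is printed. The same pattern as a standalone sketch; the SIGKILL fallback at the end is an assumption, since nothing in this log shows the target refusing to exit.

  # Sketch: graceful shutdown with a bounded poll, as traced above.
  shutdown_app() {  # hypothetical helper name
      local pid=$1
      kill -SIGINT "$pid" 2>/dev/null || return 0          # already gone
      for (( i = 0; i < 30; i++ )); do
          if ! kill -0 "$pid" 2>/dev/null; then
              echo 'SPDK target shutdown done'
              return 0
          fi
          sleep 0.5
      done
      echo "pid $pid still alive after 15s, sending SIGKILL" >&2   # assumed fallback
      kill -9 "$pid" 2>/dev/null
  }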
00:07:39.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:39.605 13:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.605 13:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60021 00:07:39.605 13:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.605 13:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60021 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 60021 ']' 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.605 13:55:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 [2024-07-25 13:55:48.817251] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:39.605 [2024-07-25 13:55:48.817468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60021 ] 00:07:39.923 [2024-07-25 13:55:48.958709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.923 [2024-07-25 13:55:49.056519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.923 [2024-07-25 13:55:49.097197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.506 13:55:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.506 13:55:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.506 13:55:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:40.765 13:55:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60021 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 60021 ']' 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 60021 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60021 00:07:40.765 killing process with pid 60021 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60021' 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 60021 00:07:40.765 13:55:49 alias_rpc -- common/autotest_common.sh@974 -- # wait 60021 00:07:41.024 ************************************ 00:07:41.024 END TEST alias_rpc 00:07:41.024 ************************************ 00:07:41.024 00:07:41.024 real 0m1.665s 00:07:41.024 user 0m1.834s 00:07:41.024 sys 0m0.391s 00:07:41.024 13:55:50 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.024 13:55:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.283 13:55:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:41.283 13:55:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:41.283 13:55:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.283 13:55:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.284 13:55:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.284 ************************************ 00:07:41.284 START TEST spdkcli_tcp 00:07:41.284 ************************************ 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:41.284 * Looking for test storage... 00:07:41.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60087 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60087 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 60087 ']' 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.284 13:55:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.284 13:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.284 [2024-07-25 13:55:50.531884] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
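spdkcli_tcp exercises the same RPC server over TCP: as the lines below show, socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at that TCP endpoint. A condensed sketch of the bridge, with the cleanup trap added here only for illustration (the real tcp.sh uses its own err_cleanup handler):

  # Sketch: expose the UNIX-socket RPC server on TCP and query it, as in tcp.sh.
  IP_ADDRESS=127.0.0.1
  PORT=9998
  socat TCP-LISTEN:"$PORT" UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  trap 'kill "$socat_pid" 2>/dev/null' EXIT   # illustrative cleanup, not from tcp.sh
  # -r 100 -t 2 -s/-p flags copied from the rpc.py invocation in the log below.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods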
00:07:41.284 [2024-07-25 13:55:50.531970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:07:41.543 [2024-07-25 13:55:50.671799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.543 [2024-07-25 13:55:50.771647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.543 [2024-07-25 13:55:50.771647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.543 [2024-07-25 13:55:50.813011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.112 13:55:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.112 13:55:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:42.112 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60104 00:07:42.112 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:42.112 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:42.372 [ 00:07:42.372 "bdev_malloc_delete", 00:07:42.372 "bdev_malloc_create", 00:07:42.372 "bdev_null_resize", 00:07:42.372 "bdev_null_delete", 00:07:42.372 "bdev_null_create", 00:07:42.372 "bdev_nvme_cuse_unregister", 00:07:42.372 "bdev_nvme_cuse_register", 00:07:42.372 "bdev_opal_new_user", 00:07:42.372 "bdev_opal_set_lock_state", 00:07:42.372 "bdev_opal_delete", 00:07:42.372 "bdev_opal_get_info", 00:07:42.372 "bdev_opal_create", 00:07:42.372 "bdev_nvme_opal_revert", 00:07:42.372 "bdev_nvme_opal_init", 00:07:42.372 "bdev_nvme_send_cmd", 00:07:42.372 "bdev_nvme_get_path_iostat", 00:07:42.372 "bdev_nvme_get_mdns_discovery_info", 00:07:42.372 "bdev_nvme_stop_mdns_discovery", 00:07:42.372 "bdev_nvme_start_mdns_discovery", 00:07:42.372 "bdev_nvme_set_multipath_policy", 00:07:42.372 "bdev_nvme_set_preferred_path", 00:07:42.372 "bdev_nvme_get_io_paths", 00:07:42.372 "bdev_nvme_remove_error_injection", 00:07:42.372 "bdev_nvme_add_error_injection", 00:07:42.372 "bdev_nvme_get_discovery_info", 00:07:42.372 "bdev_nvme_stop_discovery", 00:07:42.372 "bdev_nvme_start_discovery", 00:07:42.372 "bdev_nvme_get_controller_health_info", 00:07:42.372 "bdev_nvme_disable_controller", 00:07:42.372 "bdev_nvme_enable_controller", 00:07:42.372 "bdev_nvme_reset_controller", 00:07:42.372 "bdev_nvme_get_transport_statistics", 00:07:42.372 "bdev_nvme_apply_firmware", 00:07:42.372 "bdev_nvme_detach_controller", 00:07:42.372 "bdev_nvme_get_controllers", 00:07:42.372 "bdev_nvme_attach_controller", 00:07:42.372 "bdev_nvme_set_hotplug", 00:07:42.372 "bdev_nvme_set_options", 00:07:42.372 "bdev_passthru_delete", 00:07:42.372 "bdev_passthru_create", 00:07:42.372 "bdev_lvol_set_parent_bdev", 00:07:42.372 "bdev_lvol_set_parent", 00:07:42.372 "bdev_lvol_check_shallow_copy", 00:07:42.372 "bdev_lvol_start_shallow_copy", 00:07:42.372 "bdev_lvol_grow_lvstore", 00:07:42.372 "bdev_lvol_get_lvols", 00:07:42.372 "bdev_lvol_get_lvstores", 00:07:42.372 "bdev_lvol_delete", 00:07:42.372 "bdev_lvol_set_read_only", 00:07:42.372 "bdev_lvol_resize", 00:07:42.372 "bdev_lvol_decouple_parent", 00:07:42.372 "bdev_lvol_inflate", 00:07:42.372 "bdev_lvol_rename", 00:07:42.372 "bdev_lvol_clone_bdev", 00:07:42.372 "bdev_lvol_clone", 00:07:42.372 "bdev_lvol_snapshot", 00:07:42.372 "bdev_lvol_create", 
00:07:42.372 "bdev_lvol_delete_lvstore", 00:07:42.372 "bdev_lvol_rename_lvstore", 00:07:42.372 "bdev_lvol_create_lvstore", 00:07:42.372 "bdev_raid_set_options", 00:07:42.372 "bdev_raid_remove_base_bdev", 00:07:42.372 "bdev_raid_add_base_bdev", 00:07:42.372 "bdev_raid_delete", 00:07:42.372 "bdev_raid_create", 00:07:42.372 "bdev_raid_get_bdevs", 00:07:42.372 "bdev_error_inject_error", 00:07:42.372 "bdev_error_delete", 00:07:42.372 "bdev_error_create", 00:07:42.372 "bdev_split_delete", 00:07:42.372 "bdev_split_create", 00:07:42.372 "bdev_delay_delete", 00:07:42.372 "bdev_delay_create", 00:07:42.372 "bdev_delay_update_latency", 00:07:42.372 "bdev_zone_block_delete", 00:07:42.372 "bdev_zone_block_create", 00:07:42.372 "blobfs_create", 00:07:42.372 "blobfs_detect", 00:07:42.372 "blobfs_set_cache_size", 00:07:42.372 "bdev_aio_delete", 00:07:42.372 "bdev_aio_rescan", 00:07:42.372 "bdev_aio_create", 00:07:42.372 "bdev_ftl_set_property", 00:07:42.372 "bdev_ftl_get_properties", 00:07:42.372 "bdev_ftl_get_stats", 00:07:42.372 "bdev_ftl_unmap", 00:07:42.372 "bdev_ftl_unload", 00:07:42.372 "bdev_ftl_delete", 00:07:42.372 "bdev_ftl_load", 00:07:42.372 "bdev_ftl_create", 00:07:42.372 "bdev_virtio_attach_controller", 00:07:42.372 "bdev_virtio_scsi_get_devices", 00:07:42.372 "bdev_virtio_detach_controller", 00:07:42.372 "bdev_virtio_blk_set_hotplug", 00:07:42.372 "bdev_iscsi_delete", 00:07:42.372 "bdev_iscsi_create", 00:07:42.372 "bdev_iscsi_set_options", 00:07:42.372 "bdev_uring_delete", 00:07:42.372 "bdev_uring_rescan", 00:07:42.372 "bdev_uring_create", 00:07:42.372 "accel_error_inject_error", 00:07:42.372 "ioat_scan_accel_module", 00:07:42.372 "dsa_scan_accel_module", 00:07:42.372 "iaa_scan_accel_module", 00:07:42.372 "keyring_file_remove_key", 00:07:42.372 "keyring_file_add_key", 00:07:42.372 "keyring_linux_set_options", 00:07:42.372 "iscsi_get_histogram", 00:07:42.372 "iscsi_enable_histogram", 00:07:42.372 "iscsi_set_options", 00:07:42.372 "iscsi_get_auth_groups", 00:07:42.372 "iscsi_auth_group_remove_secret", 00:07:42.372 "iscsi_auth_group_add_secret", 00:07:42.372 "iscsi_delete_auth_group", 00:07:42.372 "iscsi_create_auth_group", 00:07:42.372 "iscsi_set_discovery_auth", 00:07:42.372 "iscsi_get_options", 00:07:42.372 "iscsi_target_node_request_logout", 00:07:42.372 "iscsi_target_node_set_redirect", 00:07:42.372 "iscsi_target_node_set_auth", 00:07:42.372 "iscsi_target_node_add_lun", 00:07:42.372 "iscsi_get_stats", 00:07:42.372 "iscsi_get_connections", 00:07:42.372 "iscsi_portal_group_set_auth", 00:07:42.372 "iscsi_start_portal_group", 00:07:42.372 "iscsi_delete_portal_group", 00:07:42.372 "iscsi_create_portal_group", 00:07:42.372 "iscsi_get_portal_groups", 00:07:42.372 "iscsi_delete_target_node", 00:07:42.372 "iscsi_target_node_remove_pg_ig_maps", 00:07:42.372 "iscsi_target_node_add_pg_ig_maps", 00:07:42.372 "iscsi_create_target_node", 00:07:42.372 "iscsi_get_target_nodes", 00:07:42.372 "iscsi_delete_initiator_group", 00:07:42.372 "iscsi_initiator_group_remove_initiators", 00:07:42.372 "iscsi_initiator_group_add_initiators", 00:07:42.372 "iscsi_create_initiator_group", 00:07:42.372 "iscsi_get_initiator_groups", 00:07:42.372 "nvmf_set_crdt", 00:07:42.372 "nvmf_set_config", 00:07:42.372 "nvmf_set_max_subsystems", 00:07:42.372 "nvmf_stop_mdns_prr", 00:07:42.372 "nvmf_publish_mdns_prr", 00:07:42.372 "nvmf_subsystem_get_listeners", 00:07:42.372 "nvmf_subsystem_get_qpairs", 00:07:42.372 "nvmf_subsystem_get_controllers", 00:07:42.372 "nvmf_get_stats", 00:07:42.372 "nvmf_get_transports", 00:07:42.372 
"nvmf_create_transport", 00:07:42.372 "nvmf_get_targets", 00:07:42.372 "nvmf_delete_target", 00:07:42.372 "nvmf_create_target", 00:07:42.372 "nvmf_subsystem_allow_any_host", 00:07:42.372 "nvmf_subsystem_remove_host", 00:07:42.372 "nvmf_subsystem_add_host", 00:07:42.372 "nvmf_ns_remove_host", 00:07:42.372 "nvmf_ns_add_host", 00:07:42.372 "nvmf_subsystem_remove_ns", 00:07:42.372 "nvmf_subsystem_add_ns", 00:07:42.372 "nvmf_subsystem_listener_set_ana_state", 00:07:42.372 "nvmf_discovery_get_referrals", 00:07:42.372 "nvmf_discovery_remove_referral", 00:07:42.372 "nvmf_discovery_add_referral", 00:07:42.372 "nvmf_subsystem_remove_listener", 00:07:42.372 "nvmf_subsystem_add_listener", 00:07:42.372 "nvmf_delete_subsystem", 00:07:42.373 "nvmf_create_subsystem", 00:07:42.373 "nvmf_get_subsystems", 00:07:42.373 "env_dpdk_get_mem_stats", 00:07:42.373 "nbd_get_disks", 00:07:42.373 "nbd_stop_disk", 00:07:42.373 "nbd_start_disk", 00:07:42.373 "ublk_recover_disk", 00:07:42.373 "ublk_get_disks", 00:07:42.373 "ublk_stop_disk", 00:07:42.373 "ublk_start_disk", 00:07:42.373 "ublk_destroy_target", 00:07:42.373 "ublk_create_target", 00:07:42.373 "virtio_blk_create_transport", 00:07:42.373 "virtio_blk_get_transports", 00:07:42.373 "vhost_controller_set_coalescing", 00:07:42.373 "vhost_get_controllers", 00:07:42.373 "vhost_delete_controller", 00:07:42.373 "vhost_create_blk_controller", 00:07:42.373 "vhost_scsi_controller_remove_target", 00:07:42.373 "vhost_scsi_controller_add_target", 00:07:42.373 "vhost_start_scsi_controller", 00:07:42.373 "vhost_create_scsi_controller", 00:07:42.373 "thread_set_cpumask", 00:07:42.373 "framework_get_governor", 00:07:42.373 "framework_get_scheduler", 00:07:42.373 "framework_set_scheduler", 00:07:42.373 "framework_get_reactors", 00:07:42.373 "thread_get_io_channels", 00:07:42.373 "thread_get_pollers", 00:07:42.373 "thread_get_stats", 00:07:42.373 "framework_monitor_context_switch", 00:07:42.373 "spdk_kill_instance", 00:07:42.373 "log_enable_timestamps", 00:07:42.373 "log_get_flags", 00:07:42.373 "log_clear_flag", 00:07:42.373 "log_set_flag", 00:07:42.373 "log_get_level", 00:07:42.373 "log_set_level", 00:07:42.373 "log_get_print_level", 00:07:42.373 "log_set_print_level", 00:07:42.373 "framework_enable_cpumask_locks", 00:07:42.373 "framework_disable_cpumask_locks", 00:07:42.373 "framework_wait_init", 00:07:42.373 "framework_start_init", 00:07:42.373 "scsi_get_devices", 00:07:42.373 "bdev_get_histogram", 00:07:42.373 "bdev_enable_histogram", 00:07:42.373 "bdev_set_qos_limit", 00:07:42.373 "bdev_set_qd_sampling_period", 00:07:42.373 "bdev_get_bdevs", 00:07:42.373 "bdev_reset_iostat", 00:07:42.373 "bdev_get_iostat", 00:07:42.373 "bdev_examine", 00:07:42.373 "bdev_wait_for_examine", 00:07:42.373 "bdev_set_options", 00:07:42.373 "notify_get_notifications", 00:07:42.373 "notify_get_types", 00:07:42.373 "accel_get_stats", 00:07:42.373 "accel_set_options", 00:07:42.373 "accel_set_driver", 00:07:42.373 "accel_crypto_key_destroy", 00:07:42.373 "accel_crypto_keys_get", 00:07:42.373 "accel_crypto_key_create", 00:07:42.373 "accel_assign_opc", 00:07:42.373 "accel_get_module_info", 00:07:42.373 "accel_get_opc_assignments", 00:07:42.373 "vmd_rescan", 00:07:42.373 "vmd_remove_device", 00:07:42.373 "vmd_enable", 00:07:42.373 "sock_get_default_impl", 00:07:42.373 "sock_set_default_impl", 00:07:42.373 "sock_impl_set_options", 00:07:42.373 "sock_impl_get_options", 00:07:42.373 "iobuf_get_stats", 00:07:42.373 "iobuf_set_options", 00:07:42.373 "framework_get_pci_devices", 00:07:42.373 
"framework_get_config", 00:07:42.373 "framework_get_subsystems", 00:07:42.373 "trace_get_info", 00:07:42.373 "trace_get_tpoint_group_mask", 00:07:42.373 "trace_disable_tpoint_group", 00:07:42.373 "trace_enable_tpoint_group", 00:07:42.373 "trace_clear_tpoint_mask", 00:07:42.373 "trace_set_tpoint_mask", 00:07:42.373 "keyring_get_keys", 00:07:42.373 "spdk_get_version", 00:07:42.373 "rpc_get_methods" 00:07:42.373 ] 00:07:42.373 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.373 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:42.373 13:55:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60087 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 60087 ']' 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 60087 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60087 00:07:42.373 killing process with pid 60087 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60087' 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 60087 00:07:42.373 13:55:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 60087 00:07:42.942 ************************************ 00:07:42.942 END TEST spdkcli_tcp 00:07:42.942 ************************************ 00:07:42.942 00:07:42.942 real 0m1.616s 00:07:42.942 user 0m2.888s 00:07:42.942 sys 0m0.420s 00:07:42.943 13:55:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.943 13:55:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.943 13:55:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:42.943 13:55:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.943 13:55:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.943 13:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:42.943 ************************************ 00:07:42.943 START TEST dpdk_mem_utility 00:07:42.943 ************************************ 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:42.943 * Looking for test storage... 
00:07:42.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:42.943 13:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:42.943 13:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60178 00:07:42.943 13:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:42.943 13:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60178 00:07:42.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 60178 ']' 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.943 13:55:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.943 [2024-07-25 13:55:52.223333] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:42.943 [2024-07-25 13:55:52.223435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:07:43.202 [2024-07-25 13:55:52.356917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.202 [2024-07-25 13:55:52.457237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.202 [2024-07-25 13:55:52.500006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.777 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.777 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:43.777 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:43.777 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:43.777 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.777 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:44.068 { 00:07:44.068 "filename": "/tmp/spdk_mem_dump.txt" 00:07:44.068 } 00:07:44.068 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.068 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:44.068 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:44.068 1 heaps totaling size 814.000000 MiB 00:07:44.068 size: 814.000000 MiB heap id: 0 00:07:44.068 end heaps---------- 00:07:44.068 8 mempools totaling size 598.116089 MiB 00:07:44.068 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:44.068 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:44.068 size: 84.521057 MiB name: bdev_io_60178 00:07:44.068 size: 51.011292 MiB name: evtpool_60178 00:07:44.068 size: 50.003479 
MiB name: msgpool_60178 00:07:44.068 size: 21.763794 MiB name: PDU_Pool 00:07:44.068 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:44.068 size: 0.026123 MiB name: Session_Pool 00:07:44.068 end mempools------- 00:07:44.068 6 memzones totaling size 4.142822 MiB 00:07:44.068 size: 1.000366 MiB name: RG_ring_0_60178 00:07:44.068 size: 1.000366 MiB name: RG_ring_1_60178 00:07:44.068 size: 1.000366 MiB name: RG_ring_4_60178 00:07:44.068 size: 1.000366 MiB name: RG_ring_5_60178 00:07:44.068 size: 0.125366 MiB name: RG_ring_2_60178 00:07:44.068 size: 0.015991 MiB name: RG_ring_3_60178 00:07:44.068 end memzones------- 00:07:44.068 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:44.068 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:07:44.068 list of free elements. size: 12.472107 MiB 00:07:44.068 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:44.068 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:44.068 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:44.068 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:44.068 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:44.068 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:44.068 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:44.068 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:44.068 element at address: 0x200000200000 with size: 0.833191 MiB 00:07:44.068 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:07:44.068 element at address: 0x20000b200000 with size: 0.489624 MiB 00:07:44.068 element at address: 0x200000800000 with size: 0.486145 MiB 00:07:44.068 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:44.068 element at address: 0x200027e00000 with size: 0.395935 MiB 00:07:44.068 element at address: 0x200003a00000 with size: 0.347839 MiB 00:07:44.068 list of standard malloc elements. 
size: 199.265320 MiB 00:07:44.068 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:44.068 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:44.068 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:44.068 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:44.068 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:44.068 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:44.068 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:44.068 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:44.068 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:44.068 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:07:44.068 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:44.068 element at address: 0x20000087c740 with size: 0.000183 MiB 00:07:44.068 element at address: 0x20000087c800 with size: 0.000183 MiB 00:07:44.068 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:07:44.068 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59240 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:44.069 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:07:44.069 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:44.069 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:44.070 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e65680 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:44.070 element at 
address: 0x200027e6d800 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fcc0 
with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:44.070 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:44.070 list of memzone associated elements. size: 602.262573 MiB 00:07:44.070 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:44.070 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:44.070 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:44.070 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:44.070 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:44.070 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60178_0 00:07:44.070 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:44.070 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60178_0 00:07:44.070 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:44.070 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60178_0 00:07:44.070 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:44.070 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:44.070 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:44.070 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:44.070 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:44.070 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60178 00:07:44.070 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:44.070 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60178 00:07:44.070 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:44.070 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60178 00:07:44.070 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:44.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:44.070 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:44.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:44.070 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:44.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:44.070 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:44.070 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:44.070 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:44.070 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60178 00:07:44.070 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:44.070 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60178 00:07:44.070 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:44.070 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60178 00:07:44.070 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:44.070 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60178 00:07:44.070 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:44.070 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60178 00:07:44.070 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:44.070 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:44.070 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:44.070 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:07:44.070 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:44.070 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:44.070 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:44.070 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60178 00:07:44.070 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:44.070 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:44.070 element at address: 0x200027e65740 with size: 0.023743 MiB 00:07:44.070 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:44.070 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:44.070 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60178 00:07:44.070 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:07:44.070 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:44.070 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:44.070 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60178 00:07:44.070 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:44.070 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60178 00:07:44.070 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:07:44.070 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:44.070 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:44.070 13:55:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60178 00:07:44.070 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 60178 ']' 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 60178 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60178 00:07:44.071 killing process with pid 60178 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60178' 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 60178 00:07:44.071 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 60178 00:07:44.330 00:07:44.330 real 0m1.538s 00:07:44.330 user 0m1.586s 00:07:44.330 sys 0m0.418s 00:07:44.330 ************************************ 00:07:44.330 END TEST dpdk_mem_utility 00:07:44.330 ************************************ 00:07:44.330 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.330 13:55:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 13:55:53 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:44.330 13:55:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.330 13:55:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.330 13:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 ************************************ 00:07:44.590 START TEST event 00:07:44.590 ************************************ 00:07:44.590 13:55:53 event -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:44.590 * Looking for test storage... 00:07:44.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:44.590 13:55:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.590 13:55:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.590 13:55:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.590 13:55:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:44.590 13:55:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.590 13:55:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 ************************************ 00:07:44.590 START TEST event_perf 00:07:44.590 ************************************ 00:07:44.590 13:55:53 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.590 Running I/O for 1 seconds...[2024-07-25 13:55:53.810019] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:07:44.590 [2024-07-25 13:55:53.810165] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:07:44.850 [2024-07-25 13:55:53.952695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.850 [2024-07-25 13:55:54.048674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.850 [2024-07-25 13:55:54.048804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.850 [2024-07-25 13:55:54.048979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.850 Running I/O for 1 seconds...[2024-07-25 13:55:54.048982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.231 00:07:46.231 lcore 0: 108864 00:07:46.231 lcore 1: 108861 00:07:46.231 lcore 2: 108861 00:07:46.231 lcore 3: 108863 00:07:46.231 done. 00:07:46.231 00:07:46.231 real 0m1.341s 00:07:46.231 user 0m4.159s 00:07:46.231 sys 0m0.058s 00:07:46.231 13:55:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.231 13:55:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.231 ************************************ 00:07:46.231 END TEST event_perf 00:07:46.231 ************************************ 00:07:46.231 13:55:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:46.231 13:55:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:46.231 13:55:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.231 13:55:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.231 ************************************ 00:07:46.231 START TEST event_reactor 00:07:46.231 ************************************ 00:07:46.231 13:55:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:46.231 [2024-07-25 13:55:55.215518] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
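The event_perf run above spreads reactors across four cores (-m 0xF, matching the -c 0xF in the EAL parameters) and processes events for one second (-t 1); the "lcore N:" lines report how many events each core handled before "done." was printed. As a rough way to repeat that measurement by hand, using the same binary path as this workspace:
  # SPDK event-processing microbenchmark: 4-core mask, run for 1 second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1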
00:07:46.231 [2024-07-25 13:55:55.215605] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:07:46.231 [2024-07-25 13:55:55.357387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.231 [2024-07-25 13:55:55.472514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.609 test_start 00:07:47.609 oneshot 00:07:47.609 tick 100 00:07:47.609 tick 100 00:07:47.609 tick 250 00:07:47.609 tick 100 00:07:47.609 tick 100 00:07:47.609 tick 250 00:07:47.609 tick 100 00:07:47.609 tick 500 00:07:47.609 tick 100 00:07:47.609 tick 100 00:07:47.609 tick 250 00:07:47.609 tick 100 00:07:47.609 tick 100 00:07:47.609 test_end 00:07:47.609 00:07:47.609 real 0m1.356s 00:07:47.609 user 0m1.192s 00:07:47.609 sys 0m0.058s 00:07:47.609 13:55:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.609 13:55:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:47.609 ************************************ 00:07:47.609 END TEST event_reactor 00:07:47.609 ************************************ 00:07:47.609 13:55:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:47.609 13:55:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:47.609 13:55:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.609 13:55:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:47.609 ************************************ 00:07:47.609 START TEST event_reactor_perf 00:07:47.609 ************************************ 00:07:47.609 13:55:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:47.609 [2024-07-25 13:55:56.641425] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
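The reactor test that just completed and the reactor_perf test starting here both drive a single reactor (the EAL parameters show -c 0x1) for a fixed duration given by -t. Their invocations, as recorded by the harness above, are simply:
  # tick trace from one reactor for 1 second
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  # throughput from one reactor for 1 second; prints "Performance: N events per second"
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1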
00:07:47.609 [2024-07-25 13:55:56.641559] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60318 ] 00:07:47.609 [2024-07-25 13:55:56.780771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.609 [2024-07-25 13:55:56.883533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.988 test_start 00:07:48.988 test_end 00:07:48.988 Performance: 447369 events per second 00:07:48.988 ************************************ 00:07:48.988 END TEST event_reactor_perf 00:07:48.988 ************************************ 00:07:48.988 00:07:48.988 real 0m1.344s 00:07:48.988 user 0m1.189s 00:07:48.988 sys 0m0.048s 00:07:48.988 13:55:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.988 13:55:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.988 13:55:58 event -- event/event.sh@49 -- # uname -s 00:07:48.988 13:55:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:48.988 13:55:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:48.988 13:55:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.988 13:55:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.988 13:55:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.988 ************************************ 00:07:48.988 START TEST event_scheduler 00:07:48.988 ************************************ 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:48.988 * Looking for test storage... 00:07:48.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:48.988 13:55:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:48.988 13:55:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60385 00:07:48.988 13:55:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:48.988 13:55:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.988 13:55:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60385 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60385 ']' 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.988 13:55:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.988 [2024-07-25 13:55:58.187664] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
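The scheduler test application starting above is launched with --wait-for-rpc, so it pauses after EAL initialization until the harness configures it over the default RPC socket; the records that follow show the dynamic scheduler being selected before initialization finishes. A condensed sketch of that flow (the backgrounding with & and the $rpc shorthand are illustrative; the arguments are the ones recorded here):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 4-core mask, main core 2, hold before subsystem init
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # select the dynamic scheduler (the POWER/governor errors below are this VM lacking cpufreq control)
  $rpc framework_set_scheduler dynamic
  # then let initialization complete
  $rpc framework_start_init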
00:07:48.988 [2024-07-25 13:55:58.187826] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:07:49.246 [2024-07-25 13:55:58.327383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.246 [2024-07-25 13:55:58.431208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.246 [2024-07-25 13:55:58.431567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.246 [2024-07-25 13:55:58.431383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.246 [2024-07-25 13:55:58.431568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:49.812 13:55:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:49.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:49.812 POWER: Cannot set governor of lcore 0 to userspace 00:07:49.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:49.812 POWER: Cannot set governor of lcore 0 to performance 00:07:49.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:49.812 POWER: Cannot set governor of lcore 0 to userspace 00:07:49.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:49.812 POWER: Cannot set governor of lcore 0 to userspace 00:07:49.812 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:49.812 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:49.812 POWER: Unable to set Power Management Environment for lcore 0 00:07:49.812 [2024-07-25 13:55:59.055544] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:49.812 [2024-07-25 13:55:59.055556] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:49.812 [2024-07-25 13:55:59.055562] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:49.812 [2024-07-25 13:55:59.055571] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:49.812 [2024-07-25 13:55:59.055576] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:49.812 [2024-07-25 13:55:59.055581] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.812 13:55:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.812 13:55:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:49.812 [2024-07-25 13:55:59.105014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.070 [2024-07-25 13:55:59.133327] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:50.070 13:55:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.070 13:55:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:50.070 13:55:59 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.070 13:55:59 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.070 13:55:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:50.070 ************************************ 00:07:50.070 START TEST scheduler_create_thread 00:07:50.070 ************************************ 00:07:50.070 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 2 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 3 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 4 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 5 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 6 00:07:50.071 
13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 7 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 8 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.071 9 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.071 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.636 10 00:07:50.636 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.636 13:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:50.636 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.636 13:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.075 13:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.075 13:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:52.075 13:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:52.075 13:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.075 13:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.640 13:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.640 13:56:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:52.640 13:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.641 13:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.575 13:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.575 13:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:53.575 13:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:53.575 13:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.575 13:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.189 13:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.189 00:07:54.189 real 0m4.209s 00:07:54.189 user 0m0.028s 00:07:54.189 sys 0m0.006s 00:07:54.189 13:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.189 13:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.189 ************************************ 00:07:54.189 END TEST scheduler_create_thread 00:07:54.189 ************************************ 00:07:54.189 13:56:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:54.189 13:56:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60385 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60385 ']' 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60385 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60385 00:07:54.189 killing process with pid 60385 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60385' 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60385 00:07:54.189 13:56:03 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60385 00:07:54.448 [2024-07-25 13:56:03.733491] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
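The scheduler_create_thread subtest above drove the app through rpc.py's plugin mechanism: each scheduler_thread_create call registers a named SPDK thread with a CPU mask (-m) and an activity percentage (-a), giving the dynamic scheduler busy and idle threads to balance, and later calls adjust or delete threads by the returned id. A sketch of the same calls, assuming scheduler_plugin is importable by rpc.py as the test's rpc_cmd wrapper arranges:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread on the same core
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # change a thread's activity by id
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12                               # remove a thread by id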
00:07:54.706 00:07:54.706 real 0m5.986s 00:07:54.706 user 0m13.699s 00:07:54.706 sys 0m0.381s 00:07:54.706 13:56:04 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.966 ************************************ 00:07:54.966 END TEST event_scheduler 00:07:54.966 ************************************ 00:07:54.966 13:56:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 13:56:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:54.966 13:56:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:54.966 13:56:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.966 13:56:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.966 13:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 ************************************ 00:07:54.966 START TEST app_repeat 00:07:54.966 ************************************ 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60496 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60496' 00:07:54.966 Process app_repeat pid: 60496 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:54.966 spdk_app_start Round 0 00:07:54.966 13:56:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60496 /var/tmp/spdk-nbd.sock 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60496 ']' 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:54.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.966 13:56:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 [2024-07-25 13:56:04.123768] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
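The app_repeat test starting above launches the app with an explicit RPC socket for the NBD helpers and then loops over three rounds (the "for i in {0..2}" above), repeating the same create, verify and teardown cycle after every restart. The launch line, as recorded by the harness:
  # 2-core mask, RPC socket shared with the nbd helpers, -t 4 matching the script's repeat_times=4
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4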
00:07:54.966 [2024-07-25 13:56:04.123848] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:07:54.966 [2024-07-25 13:56:04.265218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.225 [2024-07-25 13:56:04.373870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.225 [2024-07-25 13:56:04.373873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.225 [2024-07-25 13:56:04.433267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.792 13:56:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.792 13:56:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:55.792 13:56:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:56.051 Malloc0 00:07:56.051 13:56:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:56.309 Malloc1 00:07:56.309 13:56:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.309 13:56:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:56.876 /dev/nbd0 00:07:56.876 13:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.876 13:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.876 13:56:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:56.876 13:56:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.877 13:56:05 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.877 1+0 records in 00:07:56.877 1+0 records out 00:07:56.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436637 s, 9.4 MB/s 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.877 13:56:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:56.877 13:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.877 13:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.877 13:56:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:56.877 /dev/nbd1 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.135 1+0 records in 00:07:57.135 1+0 records out 00:07:57.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398135 s, 10.3 MB/s 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.135 13:56:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.135 13:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:57.394 { 00:07:57.394 "nbd_device": "/dev/nbd0", 00:07:57.394 "bdev_name": "Malloc0" 00:07:57.394 }, 00:07:57.394 { 00:07:57.394 "nbd_device": "/dev/nbd1", 00:07:57.394 "bdev_name": "Malloc1" 00:07:57.394 } 00:07:57.394 ]' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:57.394 { 00:07:57.394 "nbd_device": "/dev/nbd0", 00:07:57.394 "bdev_name": "Malloc0" 00:07:57.394 }, 00:07:57.394 { 00:07:57.394 "nbd_device": "/dev/nbd1", 00:07:57.394 "bdev_name": "Malloc1" 00:07:57.394 } 00:07:57.394 ]' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:57.394 /dev/nbd1' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:57.394 /dev/nbd1' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:57.394 256+0 records in 00:07:57.394 256+0 records out 00:07:57.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136607 s, 76.8 MB/s 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:57.394 256+0 records in 00:07:57.394 256+0 records out 00:07:57.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235644 s, 44.5 MB/s 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:57.394 256+0 records in 00:07:57.394 256+0 records out 00:07:57.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242284 s, 43.3 MB/s 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.394 13:56:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.654 13:56:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.912 13:56:07 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.912 13:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:58.235 13:56:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:58.235 13:56:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:58.495 13:56:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:58.754 [2024-07-25 13:56:07.911065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.754 [2024-07-25 13:56:08.014950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.754 [2024-07-25 13:56:08.014953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.754 [2024-07-25 13:56:08.055923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.754 [2024-07-25 13:56:08.055988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:58.754 [2024-07-25 13:56:08.055997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:02.037 13:56:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.037 spdk_app_start Round 1 00:08:02.037 13:56:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:02.037 13:56:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60496 /var/tmp/spdk-nbd.sock 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60496 ']' 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
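Round 0 above followed the nbd_common.sh data-verify pattern: create two malloc bdevs over the app's RPC socket, export them as /dev/nbd0 and /dev/nbd1, write a random 1 MiB pattern through each device, compare it back with cmp, then detach both devices. Condensed to a single device (the $rpc shorthand is illustrative and paths are shortened; the harness keeps its scratch files under spdk/test/event):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096          # size 64, block size 4096; the app names it Malloc0
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0    # expose it as an NBD block device
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct      # write it through the NBD device
  cmp -b -n 1M nbdrandtest /dev/nbd0                                 # verify the device contents match
  $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0             # detach before the next round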
00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.037 13:56:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:02.037 13:56:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.037 Malloc0 00:08:02.037 13:56:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.322 Malloc1 00:08:02.322 13:56:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.322 13:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:02.322 /dev/nbd0 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.593 1+0 records in 00:08:02.593 1+0 records out 
00:08:02.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201765 s, 20.3 MB/s 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:02.593 /dev/nbd1 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.593 1+0 records in 00:08:02.593 1+0 records out 00:08:02.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179146 s, 22.9 MB/s 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.593 13:56:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.593 13:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.594 13:56:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.594 13:56:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.594 13:56:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.853 { 00:08:02.853 "nbd_device": "/dev/nbd0", 00:08:02.853 "bdev_name": "Malloc0" 00:08:02.853 }, 00:08:02.853 { 00:08:02.853 "nbd_device": "/dev/nbd1", 00:08:02.853 "bdev_name": "Malloc1" 00:08:02.853 } 
00:08:02.853 ]' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.853 { 00:08:02.853 "nbd_device": "/dev/nbd0", 00:08:02.853 "bdev_name": "Malloc0" 00:08:02.853 }, 00:08:02.853 { 00:08:02.853 "nbd_device": "/dev/nbd1", 00:08:02.853 "bdev_name": "Malloc1" 00:08:02.853 } 00:08:02.853 ]' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.853 /dev/nbd1' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.853 /dev/nbd1' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:02.853 13:56:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:03.113 256+0 records in 00:08:03.113 256+0 records out 00:08:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140084 s, 74.9 MB/s 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.113 256+0 records in 00:08:03.113 256+0 records out 00:08:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234658 s, 44.7 MB/s 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.113 256+0 records in 00:08:03.113 256+0 records out 00:08:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230736 s, 45.4 MB/s 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:03.113 13:56:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.113 13:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.373 13:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.631 13:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.891 13:56:13 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:03.891 13:56:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:03.891 13:56:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.151 13:56:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.444 [2024-07-25 13:56:13.477061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.444 [2024-07-25 13:56:13.581699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.444 [2024-07-25 13:56:13.581705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.444 [2024-07-25 13:56:13.626325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.444 [2024-07-25 13:56:13.626408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.444 [2024-07-25 13:56:13.626417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:07.737 spdk_app_start Round 2 00:08:07.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60496 /var/tmp/spdk-nbd.sock 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60496 ']' 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
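The teardown that closes each app_repeat round above (nbd_get_disks returning '[]', grep -c yielding 0, then spdk_kill_instance SIGTERM) reduces to the short bash sketch below. The socket path and commands are taken from the trace; the variable names are illustrative, and the bare 'true' entries seen in the trace come from guarding grep -c, which exits non-zero when it counts zero matches.

    # confirm no NBD devices remain registered before killing the app instance
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)  # '[]' once both disks are stopped
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')                                       # empty string for '[]'
    count=$(echo "$disks_name" | grep -c /dev/nbd) || true                                             # grep -c prints 0 but exits 1 on no matches
    [ "$count" -ne 0 ] && exit 1                                                                       # trace: '[' 0 -ne 0 ']' is false, so the test continues
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM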
00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.737 13:56:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:07.737 Malloc0 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:07.737 Malloc1 00:08:07.737 13:56:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.737 13:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:07.997 /dev/nbd0 00:08:07.997 13:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.997 13:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.997 1+0 records in 00:08:07.997 1+0 records out 
00:08:07.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419289 s, 9.8 MB/s 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.997 13:56:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:07.997 13:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.997 13:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.997 13:56:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:08.256 /dev/nbd1 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.256 1+0 records in 00:08:08.256 1+0 records out 00:08:08.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313095 s, 13.1 MB/s 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.256 13:56:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.256 13:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.516 { 00:08:08.516 "nbd_device": "/dev/nbd0", 00:08:08.516 "bdev_name": "Malloc0" 00:08:08.516 }, 00:08:08.516 { 00:08:08.516 "nbd_device": "/dev/nbd1", 00:08:08.516 "bdev_name": "Malloc1" 00:08:08.516 } 
00:08:08.516 ]' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.516 { 00:08:08.516 "nbd_device": "/dev/nbd0", 00:08:08.516 "bdev_name": "Malloc0" 00:08:08.516 }, 00:08:08.516 { 00:08:08.516 "nbd_device": "/dev/nbd1", 00:08:08.516 "bdev_name": "Malloc1" 00:08:08.516 } 00:08:08.516 ]' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.516 /dev/nbd1' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.516 /dev/nbd1' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:08.516 256+0 records in 00:08:08.516 256+0 records out 00:08:08.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102642 s, 102 MB/s 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.516 256+0 records in 00:08:08.516 256+0 records out 00:08:08.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193135 s, 54.3 MB/s 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.516 13:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.775 256+0 records in 00:08:08.775 256+0 records out 00:08:08.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210212 s, 49.9 MB/s 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.775 13:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.033 13:56:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.292 13:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:09.553 13:56:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:09.554 13:56:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:09.812 13:56:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:09.812 [2024-07-25 13:56:19.050998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.069 [2024-07-25 13:56:19.148446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.069 [2024-07-25 13:56:19.148447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.069 [2024-07-25 13:56:19.190463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.069 [2024-07-25 13:56:19.190522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:10.069 [2024-07-25 13:56:19.190530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:12.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:12.605 13:56:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60496 /var/tmp/spdk-nbd.sock 00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60496 ']' 00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
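The data path exercised in both rounds above (nbd_dd_data_verify in write mode, then verify mode) comes down to a plain dd/cmp round trip. This sketch restates the commands from the trace with the same temp-file path and device list; it is a readable summary under those assumptions, not the verbatim nbd_common.sh source.

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=('/dev/nbd0' '/dev/nbd1')

    # write: 1 MiB of random data into a scratch file, then onto each NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1M of every device against the scratch file, then clean up
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"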
00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.605 13:56:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:12.865 13:56:22 event.app_repeat -- event/event.sh@39 -- # killprocess 60496 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60496 ']' 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60496 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60496 00:08:12.865 killing process with pid 60496 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60496' 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60496 00:08:12.865 13:56:22 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60496 00:08:13.125 spdk_app_start is called in Round 0. 00:08:13.125 Shutdown signal received, stop current app iteration 00:08:13.125 Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 reinitialization... 00:08:13.125 spdk_app_start is called in Round 1. 00:08:13.125 Shutdown signal received, stop current app iteration 00:08:13.125 Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 reinitialization... 00:08:13.125 spdk_app_start is called in Round 2. 00:08:13.125 Shutdown signal received, stop current app iteration 00:08:13.125 Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 reinitialization... 00:08:13.125 spdk_app_start is called in Round 3. 00:08:13.125 Shutdown signal received, stop current app iteration 00:08:13.125 ************************************ 00:08:13.125 END TEST app_repeat 00:08:13.125 ************************************ 00:08:13.125 13:56:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:13.125 13:56:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:13.125 00:08:13.125 real 0m18.236s 00:08:13.125 user 0m40.332s 00:08:13.125 sys 0m2.898s 00:08:13.125 13:56:22 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.125 13:56:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:13.125 13:56:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:13.125 13:56:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:13.125 13:56:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.125 13:56:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.125 13:56:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.125 ************************************ 00:08:13.125 START TEST cpu_locks 00:08:13.125 ************************************ 00:08:13.125 13:56:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:13.384 * Looking for test storage... 
00:08:13.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:13.384 13:56:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:13.384 13:56:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:13.384 13:56:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:13.384 13:56:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:13.384 13:56:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.384 13:56:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.384 13:56:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.384 ************************************ 00:08:13.384 START TEST default_locks 00:08:13.384 ************************************ 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60917 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60917 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60917 ']' 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.384 13:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.384 [2024-07-25 13:56:22.586062] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:13.384 [2024-07-25 13:56:22.586238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:08:13.643 [2024-07-25 13:56:22.722898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.643 [2024-07-25 13:56:22.840475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.643 [2024-07-25 13:56:22.890615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.211 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.211 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:14.211 13:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60917 00:08:14.211 13:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60917 00:08:14.211 13:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60917 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60917 ']' 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60917 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60917 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.778 killing process with pid 60917 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60917' 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60917 00:08:14.778 13:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60917 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60917 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60917 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60917 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60917 ']' 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.037 13:56:24 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60917) - No such process 00:08:15.037 ERROR: process (pid: 60917) is no longer running 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:15.037 ************************************ 00:08:15.037 END TEST default_locks 00:08:15.037 ************************************ 00:08:15.037 00:08:15.037 real 0m1.654s 00:08:15.037 user 0m1.729s 00:08:15.037 sys 0m0.486s 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.037 13:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 13:56:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:15.037 13:56:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.037 13:56:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.037 13:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 ************************************ 00:08:15.037 START TEST default_locks_via_rpc 00:08:15.037 ************************************ 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60964 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60964 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60964 ']' 00:08:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
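The default_locks pass above decides success with the lslocks probe that recurs throughout this trace: a target that has claimed its scheduler core holds an spdk_cpu_lock file lock, and a killed target (or one checked via the NOT/no_locks path) does not. A hedged reconstruction of that check, with the pid value reused from the trace purely as an example:

    # does the process still hold an SPDK per-core file lock? (exit 0 if yes)
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 60917 && echo "pid 60917 holds a core lock"   # 60917 is the spdk_tgt pid from the trace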
00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.037 13:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 [2024-07-25 13:56:24.313140] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:15.037 [2024-07-25 13:56:24.313228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60964 ] 00:08:15.296 [2024-07-25 13:56:24.449189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.297 [2024-07-25 13:56:24.552969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.297 [2024-07-25 13:56:24.596536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.864 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.864 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.865 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.228 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.228 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60964 00:08:16.228 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60964 00:08:16.228 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60964 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60964 ']' 00:08:16.506 13:56:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60964 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60964 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.506 killing process with pid 60964 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60964' 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60964 00:08:16.506 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60964 00:08:16.765 00:08:16.765 real 0m1.709s 00:08:16.765 user 0m1.729s 00:08:16.765 sys 0m0.545s 00:08:16.765 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.765 13:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 ************************************ 00:08:16.765 END TEST default_locks_via_rpc 00:08:16.765 ************************************ 00:08:16.765 13:56:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:16.765 13:56:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.765 13:56:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.765 13:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 ************************************ 00:08:16.765 START TEST non_locking_app_on_locked_coremask 00:08:16.765 ************************************ 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61015 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61015 /var/tmp/spdk.sock 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61015 ']' 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
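The default_locks_via_rpc pass that ends above toggles the same core locks without restarting the target, using the two RPC names that appear verbatim in the trace (framework_disable_cpumask_locks and framework_enable_cpumask_locks). A simplified sketch of that flow over the default RPC socket; the rpc.py path and pid match the trace, while the echo messages and the direct rpc.py invocation (the trace goes through the rpc_cmd wrapper) are illustrative assumptions.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=60964   # spdk_tgt pid from the trace

    "$rpc" -s "$sock" framework_disable_cpumask_locks           # target releases its spdk_cpu_lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"

    "$rpc" -s "$sock" framework_enable_cpumask_locks            # target re-acquires the per-core locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "unexpected: locks not re-acquired"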
00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.765 13:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 [2024-07-25 13:56:26.075393] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:17.024 [2024-07-25 13:56:26.075472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:08:17.024 [2024-07-25 13:56:26.215535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.024 [2024-07-25 13:56:26.316616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.283 [2024-07-25 13:56:26.357506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61031 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61031 /var/tmp/spdk2.sock 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61031 ']' 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.852 13:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.852 [2024-07-25 13:56:27.066809] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:17.852 [2024-07-25 13:56:27.066895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:08:18.111 [2024-07-25 13:56:27.201266] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:18.111 [2024-07-25 13:56:27.201327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.371 [2024-07-25 13:56:27.422415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.371 [2024-07-25 13:56:27.506262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.940 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.940 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:18.940 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61015 00:08:18.940 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:18.940 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61015 ']' 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.510 killing process with pid 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61015' 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61015 00:08:19.510 13:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61015 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61031 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61031 ']' 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61031 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61031 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.449 killing process with pid 61031 00:08:20.449 13:56:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61031' 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61031 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61031 00:08:20.449 00:08:20.449 real 0m3.734s 00:08:20.449 user 0m4.160s 00:08:20.449 sys 0m1.051s 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.449 13:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.449 ************************************ 00:08:20.449 END TEST non_locking_app_on_locked_coremask 00:08:20.449 ************************************ 00:08:20.708 13:56:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:20.708 13:56:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.708 13:56:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.708 13:56:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.708 ************************************ 00:08:20.708 START TEST locking_app_on_unlocked_coremask 00:08:20.708 ************************************ 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61098 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61098 /var/tmp/spdk.sock 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61098 ']' 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.708 13:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.708 [2024-07-25 13:56:29.879919] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:20.708 [2024-07-25 13:56:29.880074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:08:20.968 [2024-07-25 13:56:30.017653] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:20.968 [2024-07-25 13:56:30.017855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.968 [2024-07-25 13:56:30.120632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.968 [2024-07-25 13:56:30.162533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61113 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61113 /var/tmp/spdk2.sock 00:08:21.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61113 ']' 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.536 13:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.536 [2024-07-25 13:56:30.819710] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:21.536 [2024-07-25 13:56:30.819848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61113 ] 00:08:21.795 [2024-07-25 13:56:30.953384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.053 [2024-07-25 13:56:31.168840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.053 [2024-07-25 13:56:31.254682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.620 13:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.620 13:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:22.620 13:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61113 00:08:22.620 13:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:22.620 13:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61113 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61098 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61098 ']' 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61098 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61098 00:08:23.187 killing process with pid 61098 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61098' 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61098 00:08:23.187 13:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61098 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61113 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61113 ']' 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61113 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61113 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61113' 00:08:24.121 killing process with pid 61113 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61113 00:08:24.121 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61113 00:08:24.378 ************************************ 00:08:24.378 END TEST locking_app_on_unlocked_coremask 00:08:24.378 ************************************ 00:08:24.378 00:08:24.378 real 0m3.622s 00:08:24.378 user 0m3.951s 00:08:24.378 sys 0m0.927s 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.378 13:56:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:24.378 13:56:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.378 13:56:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.378 13:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.378 ************************************ 00:08:24.378 START TEST locking_app_on_locked_coremask 00:08:24.378 ************************************ 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61170 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61170 /var/tmp/spdk.sock 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61170 ']' 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.378 13:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.378 [2024-07-25 13:56:33.543927] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:24.378 [2024-07-25 13:56:33.544009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61170 ] 00:08:24.378 [2024-07-25 13:56:33.673318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.636 [2024-07-25 13:56:33.783702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.636 [2024-07-25 13:56:33.828114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61186 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61186 /var/tmp/spdk2.sock 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61186 /var/tmp/spdk2.sock 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:25.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61186 /var/tmp/spdk2.sock 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61186 ']' 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.569 13:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.569 [2024-07-25 13:56:34.639225] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
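What locking_app_on_locked_coremask exercises: the spdk_tgt launched above (pid 61170, -m 0x1) takes the lock for core 0, and the harness then starts a second spdk_tgt on the same mask but a different RPC socket and expects it to refuse to run. A minimal out-of-harness sketch of that scenario, using the same binary and socket paths that appear in this log (the second command is the one expected to fail):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                       # takes the core-0 lock file (/var/tmp/spdk_cpu_lock_000, per the naming used by check_remaining_locks later in this log)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock  # expected to abort:
#   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.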
00:08:25.569 [2024-07-25 13:56:34.639316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61186 ] 00:08:25.569 [2024-07-25 13:56:34.770261] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61170 has claimed it. 00:08:25.569 [2024-07-25 13:56:34.770338] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:26.137 ERROR: process (pid: 61186) is no longer running 00:08:26.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61186) - No such process 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61170 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61170 00:08:26.137 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61170 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61170 ']' 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61170 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.396 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61170 00:08:26.662 killing process with pid 61170 00:08:26.662 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.662 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.662 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61170' 00:08:26.662 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61170 00:08:26.662 13:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61170 00:08:26.920 ************************************ 00:08:26.920 END TEST locking_app_on_locked_coremask 00:08:26.920 ************************************ 00:08:26.920 00:08:26.920 real 0m2.546s 00:08:26.920 user 0m2.937s 00:08:26.920 sys 0m0.620s 00:08:26.920 13:56:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.920 13:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.920 13:56:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:26.920 13:56:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.920 13:56:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.920 13:56:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.920 ************************************ 00:08:26.920 START TEST locking_overlapped_coremask 00:08:26.920 ************************************ 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61237 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61237 /var/tmp/spdk.sock 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61237 ']' 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.920 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.920 [2024-07-25 13:56:36.114505] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:26.920 [2024-07-25 13:56:36.114578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61237 ] 00:08:27.179 [2024-07-25 13:56:36.258998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.179 [2024-07-25 13:56:36.347073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.179 [2024-07-25 13:56:36.347262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.179 [2024-07-25 13:56:36.347265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.179 [2024-07-25 13:56:36.388494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61250 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61250 /var/tmp/spdk2.sock 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61250 /var/tmp/spdk2.sock 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:27.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61250 /var/tmp/spdk2.sock 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61250 ']' 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.749 13:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.749 [2024-07-25 13:56:37.015145] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
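For locking_overlapped_coremask the first target (pid 61237) runs with -m 0x7, i.e. cores 0-2, and the second target being started just above uses -m 0x1c, i.e. cores 2-4, so the two masks overlap only on core 2; that is why the failure reported below names that core. A quick sanity check of the overlap (illustrative shell arithmetic, not part of the test script):
printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only bit 2 (core 2) is shared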
00:08:27.749 [2024-07-25 13:56:37.015215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61250 ] 00:08:28.007 [2024-07-25 13:56:37.149683] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61237 has claimed it. 00:08:28.007 [2024-07-25 13:56:37.149749] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:28.575 ERROR: process (pid: 61250) is no longer running 00:08:28.575 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61250) - No such process 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61237 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61237 ']' 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61237 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61237 00:08:28.575 killing process with pid 61237 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61237' 00:08:28.575 13:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61237 00:08:28.575 13:56:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61237 00:08:28.833 ************************************ 00:08:28.833 00:08:28.833 real 0m1.979s 00:08:28.833 user 0m5.405s 00:08:28.833 sys 0m0.341s 00:08:28.833 13:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.833 13:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 END TEST locking_overlapped_coremask 00:08:28.833 ************************************ 00:08:28.833 13:56:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:28.833 13:56:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.833 13:56:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.833 13:56:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 ************************************ 00:08:28.833 START TEST locking_overlapped_coremask_via_rpc 00:08:28.833 ************************************ 00:08:28.833 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.833 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61290 00:08:28.833 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61290 /var/tmp/spdk.sock 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61290 ']' 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.834 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 [2024-07-25 13:56:38.149155] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:29.092 [2024-07-25 13:56:38.149669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:08:29.092 [2024-07-25 13:56:38.295270] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:29.092 [2024-07-25 13:56:38.295340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.092 [2024-07-25 13:56:38.388250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.092 [2024-07-25 13:56:38.388410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.092 [2024-07-25 13:56:38.388411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.350 [2024-07-25 13:56:38.430465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61308 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61308 /var/tmp/spdk2.sock 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61308 ']' 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.918 13:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 [2024-07-25 13:56:39.051115] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:29.918 [2024-07-25 13:56:39.051583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:08:29.918 [2024-07-25 13:56:39.188201] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:29.918 [2024-07-25 13:56:39.188235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.177 [2024-07-25 13:56:39.392398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.177 [2024-07-25 13:56:39.392440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:30.177 [2024-07-25 13:56:39.392441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.177 [2024-07-25 13:56:39.476088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.746 [2024-07-25 13:56:39.987407] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61290 has claimed it. 00:08:30.746 request: 00:08:30.746 { 00:08:30.746 "method": "framework_enable_cpumask_locks", 00:08:30.746 "req_id": 1 00:08:30.746 } 00:08:30.746 Got JSON-RPC error response 00:08:30.746 response: 00:08:30.746 { 00:08:30.746 "code": -32603, 00:08:30.746 "message": "Failed to claim CPU core: 2" 00:08:30.746 } 00:08:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
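The exchange just above is the core of locking_overlapped_coremask_via_rpc: both targets were started with --disable-cpumask-locks, the first lock claim over JSON-RPC succeeds, and the second collides on the shared core 2 and returns the -32603 error shown in the request/response pair. Reduced to direct rpc.py calls (an illustration of what the harness's rpc_cmd wrapper is doing here):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                         # pid 61290 (-m 0x7) claims cores 0-2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # pid 61308 (-m 0x1c) fails:
#   JSON-RPC error -32603, "Failed to claim CPU core: 2"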
00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61290 /var/tmp/spdk.sock 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61290 ']' 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.746 13:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61308 /var/tmp/spdk2.sock 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61308 ']' 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
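Once the locks are claimed, check_remaining_locks (expanded below, and earlier at the end of locking_overlapped_coremask) simply compares the /var/tmp/spdk_cpu_lock_* glob against the expected names for cores 0-2. The same state can be inspected by hand while a target holds its locks; a small illustrative check along the lines of the harness's locks_exist helper:
ls /var/tmp/spdk_cpu_lock_*     # expect spdk_cpu_lock_000, _001 and _002 for a -m 0x7 target
lslocks | grep spdk_cpu_lock    # the file locks backing them, which locks_exist greps for via lslocks -p <pid>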
00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.005 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.265 ************************************ 00:08:31.265 END TEST locking_overlapped_coremask_via_rpc 00:08:31.265 ************************************ 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:31.265 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:31.266 00:08:31.266 real 0m2.333s 00:08:31.266 user 0m1.081s 00:08:31.266 sys 0m0.176s 00:08:31.266 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.266 13:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 13:56:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:31.266 13:56:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61290 ]] 00:08:31.266 13:56:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61290 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61290 ']' 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61290 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61290 00:08:31.266 killing process with pid 61290 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61290' 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61290 00:08:31.266 13:56:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61290 00:08:31.526 13:56:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61308 ]] 00:08:31.526 13:56:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61308 00:08:31.526 13:56:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61308 ']' 00:08:31.526 13:56:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61308 00:08:31.526 13:56:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:31.526 13:56:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.526 
13:56:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61308 00:08:31.786 killing process with pid 61308 00:08:31.786 13:56:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:31.786 13:56:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:31.786 13:56:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61308' 00:08:31.786 13:56:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61308 00:08:31.786 13:56:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61308 00:08:32.047 13:56:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:32.047 13:56:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:32.048 13:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61290 ]] 00:08:32.048 13:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61290 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61290 ']' 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61290 00:08:32.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61290) - No such process 00:08:32.048 Process with pid 61290 is not found 00:08:32.048 Process with pid 61308 is not found 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61290 is not found' 00:08:32.048 13:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61308 ]] 00:08:32.048 13:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61308 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61308 ']' 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61308 00:08:32.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61308) - No such process 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61308 is not found' 00:08:32.048 13:56:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:32.048 00:08:32.048 real 0m18.816s 00:08:32.048 user 0m31.934s 00:08:32.048 sys 0m4.971s 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.048 13:56:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 END TEST cpu_locks 00:08:32.048 ************************************ 00:08:32.048 00:08:32.048 real 0m47.611s 00:08:32.048 user 1m32.693s 00:08:32.048 sys 0m8.765s 00:08:32.048 13:56:41 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.048 13:56:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 END TEST event 00:08:32.048 ************************************ 00:08:32.048 13:56:41 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:32.048 13:56:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.048 13:56:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.048 13:56:41 -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 START TEST thread 00:08:32.048 ************************************ 00:08:32.048 13:56:41 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:32.308 * Looking for test storage... 
00:08:32.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:32.308 13:56:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:32.308 13:56:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:32.308 13:56:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.308 13:56:41 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.308 ************************************ 00:08:32.308 START TEST thread_poller_perf 00:08:32.308 ************************************ 00:08:32.308 13:56:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:32.308 [2024-07-25 13:56:41.437722] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:32.308 [2024-07-25 13:56:41.437807] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61431 ] 00:08:32.308 [2024-07-25 13:56:41.575868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.567 [2024-07-25 13:56:41.669501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.567 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:33.504 ====================================== 00:08:33.504 busy:2301255010 (cyc) 00:08:33.504 total_run_count: 370000 00:08:33.504 tsc_hz: 2290000000 (cyc) 00:08:33.504 ====================================== 00:08:33.504 poller_cost: 6219 (cyc), 2715 (nsec) 00:08:33.504 00:08:33.504 real 0m1.332s 00:08:33.504 user 0m1.175s 00:08:33.504 sys 0m0.050s 00:08:33.504 13:56:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.504 13:56:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.504 ************************************ 00:08:33.504 END TEST thread_poller_perf 00:08:33.504 ************************************ 00:08:33.504 13:56:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:33.504 13:56:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:33.504 13:56:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.504 13:56:42 thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.762 ************************************ 00:08:33.762 START TEST thread_poller_perf 00:08:33.762 ************************************ 00:08:33.762 13:56:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:33.762 [2024-07-25 13:56:42.840070] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:33.762 [2024-07-25 13:56:42.840262] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61461 ] 00:08:33.762 [2024-07-25 13:56:42.982255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.021 [2024-07-25 13:56:43.084336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.021 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:34.960 ====================================== 00:08:34.960 busy:2291962800 (cyc) 00:08:34.960 total_run_count: 4861000 00:08:34.960 tsc_hz: 2290000000 (cyc) 00:08:34.960 ====================================== 00:08:34.960 poller_cost: 471 (cyc), 205 (nsec) 00:08:34.960 00:08:34.960 real 0m1.347s 00:08:34.960 user 0m1.190s 00:08:34.960 sys 0m0.051s 00:08:34.960 13:56:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.960 13:56:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:34.960 ************************************ 00:08:34.960 END TEST thread_poller_perf 00:08:34.960 ************************************ 00:08:34.960 13:56:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:34.960 00:08:34.960 real 0m2.893s 00:08:34.960 user 0m2.439s 00:08:34.960 sys 0m0.248s 00:08:34.960 13:56:44 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.960 13:56:44 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.960 ************************************ 00:08:34.960 END TEST thread 00:08:34.960 ************************************ 00:08:34.960 13:56:44 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:34.960 13:56:44 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.960 13:56:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.960 13:56:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.960 13:56:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.219 ************************************ 00:08:35.219 START TEST app_cmdline 00:08:35.219 ************************************ 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:35.219 * Looking for test storage... 00:08:35.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:35.219 13:56:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:35.219 13:56:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61536 00:08:35.219 13:56:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:35.219 13:56:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61536 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61536 ']' 00:08:35.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
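Stepping back to the two poller_perf summaries above: poller_cost appears to be the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. Recomputing the logged figures with plain shell arithmetic (a sanity check, not part of the test):
echo $(( 2301255010 / 370000 ))              # 6219 cycles per iteration for the 1 us period run
echo $(( 6219 * 1000000000 / 2290000000 ))   # ~2715 ns at tsc_hz 2290000000
echo $(( 2291962800 / 4861000 ))             # 471 cycles per iteration for the 0 us period run
echo $(( 471 * 1000000000 / 2290000000 ))    # ~205 ns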
00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.219 13:56:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.219 [2024-07-25 13:56:44.439275] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:35.219 [2024-07-25 13:56:44.439390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61536 ] 00:08:35.479 [2024-07-25 13:56:44.577371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.479 [2024-07-25 13:56:44.681594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.479 [2024-07-25 13:56:44.724350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.054 13:56:45 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.054 13:56:45 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:36.054 13:56:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:36.313 { 00:08:36.313 "version": "SPDK v24.09-pre git sha1 208b98e37", 00:08:36.313 "fields": { 00:08:36.313 "major": 24, 00:08:36.313 "minor": 9, 00:08:36.313 "patch": 0, 00:08:36.313 "suffix": "-pre", 00:08:36.313 "commit": "208b98e37" 00:08:36.313 } 00:08:36.313 } 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:36.313 13:56:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
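The cmdline test's target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods is expected to list exactly those two methods and any other call, such as the env_dpdk_get_mem_stats attempt whose outcome follows, should be rejected. Reduced to direct rpc.py calls (illustrative; the harness goes through its rpc_cmd and NOT wrappers):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed, prints the version JSON seen above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods           # allowed, returns only the two permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # rejected with JSON-RPC error -32601 "Method not found"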
00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:36.313 13:56:45 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.572 request: 00:08:36.572 { 00:08:36.572 "method": "env_dpdk_get_mem_stats", 00:08:36.572 "req_id": 1 00:08:36.572 } 00:08:36.572 Got JSON-RPC error response 00:08:36.572 response: 00:08:36.572 { 00:08:36.572 "code": -32601, 00:08:36.572 "message": "Method not found" 00:08:36.572 } 00:08:36.572 13:56:45 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.573 13:56:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61536 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61536 ']' 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61536 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61536 00:08:36.573 killing process with pid 61536 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61536' 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@969 -- # kill 61536 00:08:36.573 13:56:45 app_cmdline -- common/autotest_common.sh@974 -- # wait 61536 00:08:37.142 00:08:37.142 real 0m1.896s 00:08:37.142 user 0m2.278s 00:08:37.142 sys 0m0.437s 00:08:37.142 13:56:46 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.142 13:56:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.142 ************************************ 00:08:37.142 END TEST app_cmdline 00:08:37.142 ************************************ 00:08:37.142 13:56:46 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:37.142 13:56:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.142 13:56:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.142 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:08:37.142 ************************************ 00:08:37.142 START TEST version 00:08:37.142 ************************************ 00:08:37.142 13:56:46 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:37.142 * Looking for test storage... 
00:08:37.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:37.142 13:56:46 version -- app/version.sh@17 -- # get_header_version major 00:08:37.142 13:56:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # cut -f2 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.142 13:56:46 version -- app/version.sh@17 -- # major=24 00:08:37.142 13:56:46 version -- app/version.sh@18 -- # get_header_version minor 00:08:37.142 13:56:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # cut -f2 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.142 13:56:46 version -- app/version.sh@18 -- # minor=9 00:08:37.142 13:56:46 version -- app/version.sh@19 -- # get_header_version patch 00:08:37.142 13:56:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # cut -f2 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.142 13:56:46 version -- app/version.sh@19 -- # patch=0 00:08:37.142 13:56:46 version -- app/version.sh@20 -- # get_header_version suffix 00:08:37.142 13:56:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # cut -f2 00:08:37.142 13:56:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.142 13:56:46 version -- app/version.sh@20 -- # suffix=-pre 00:08:37.142 13:56:46 version -- app/version.sh@22 -- # version=24.9 00:08:37.142 13:56:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:37.142 13:56:46 version -- app/version.sh@28 -- # version=24.9rc0 00:08:37.142 13:56:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:37.142 13:56:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:37.142 13:56:46 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:37.142 13:56:46 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:37.401 ************************************ 00:08:37.401 END TEST version 00:08:37.401 ************************************ 00:08:37.401 00:08:37.401 real 0m0.221s 00:08:37.401 user 0m0.121s 00:08:37.401 sys 0m0.151s 00:08:37.401 13:56:46 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.401 13:56:46 version -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 13:56:46 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:37.401 13:56:46 -- spdk/autotest.sh@202 -- # uname -s 00:08:37.401 13:56:46 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:37.401 13:56:46 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:37.401 13:56:46 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:08:37.401 13:56:46 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:08:37.401 13:56:46 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:37.401 13:56:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.401 13:56:46 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.401 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 ************************************ 00:08:37.401 START TEST spdk_dd 00:08:37.401 ************************************ 00:08:37.401 13:56:46 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:37.401 * Looking for test storage... 00:08:37.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:37.401 13:56:46 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.401 13:56:46 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.401 13:56:46 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.401 13:56:46 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.401 13:56:46 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.401 13:56:46 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.401 13:56:46 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.401 13:56:46 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:37.401 13:56:46 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.401 13:56:46 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:37.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.971 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.971 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.971 13:56:47 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:37.971 13:56:47 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:37.971 13:56:47 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@230 -- # local class 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@232 -- # local progif 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@233 -- # class=01 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:08:37.971 13:56:47 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:08:37.972 13:56:47 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:08:37.972 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.232 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:38.232 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.232 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:38.232 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:38.233 * spdk_dd linked to liburing 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:38.233 13:56:47 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:38.233 13:56:47 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:38.233 13:56:47 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:38.233 13:56:47 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:38.233 13:56:47 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:38.233 13:56:47 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:38.233 13:56:47 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.233 13:56:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:38.233 ************************************ 00:08:38.233 START TEST spdk_dd_basic_rw 00:08:38.233 ************************************ 00:08:38.233 13:56:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:38.233 * Looking for test storage... 00:08:38.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:38.233 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.233 13:56:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.233 13:56:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:38.234 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:38.496 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:38.496 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.497 ************************************ 00:08:38.497 START TEST dd_bs_lt_native_bs 00:08:38.497 ************************************ 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.497 13:56:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:38.497 { 00:08:38.497 "subsystems": [ 00:08:38.497 { 00:08:38.497 "subsystem": "bdev", 00:08:38.497 "config": [ 00:08:38.497 { 00:08:38.497 "params": { 00:08:38.497 "trtype": "pcie", 00:08:38.497 "traddr": "0000:00:10.0", 00:08:38.497 "name": "Nvme0" 00:08:38.497 }, 00:08:38.497 "method": 
"bdev_nvme_attach_controller" 00:08:38.497 }, 00:08:38.497 { 00:08:38.497 "method": "bdev_wait_for_examine" 00:08:38.497 } 00:08:38.497 ] 00:08:38.497 } 00:08:38.497 ] 00:08:38.497 } 00:08:38.497 [2024-07-25 13:56:47.708725] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:38.497 [2024-07-25 13:56:47.708793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:08:38.757 [2024-07-25 13:56:47.845017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.757 [2024-07-25 13:56:47.944116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.757 [2024-07-25 13:56:47.985334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.016 [2024-07-25 13:56:48.083957] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:39.016 [2024-07-25 13:56:48.084015] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.016 [2024-07-25 13:56:48.183726] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:39.016 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:08:39.017 ************************************ 00:08:39.017 END TEST dd_bs_lt_native_bs 00:08:39.017 ************************************ 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.017 00:08:39.017 real 0m0.625s 00:08:39.017 user 0m0.438s 00:08:39.017 sys 0m0.141s 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.017 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.325 ************************************ 00:08:39.325 START TEST dd_rw 00:08:39.325 ************************************ 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:39.325 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.584 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:39.584 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:39.584 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:39.584 13:56:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.584 [2024-07-25 13:56:48.813193] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
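The two inputs that drive everything from here on were just derived: the drive's native block size, pulled out of the spdk_nvme_identify dump above, and the block-size/queue-depth matrix built from it. A condensed bash sketch of that step, using the variable names and values visible in the log (an outline of what dd/common.sh and basic_rw.sh do here, not their exact code):

# Native block size: match the current LBA format, then its data size.
id_output=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_output =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # lbaf=04 in this run
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # native_bs=4096

# Test matrix for dd_rw, as the entries above set it up.
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))   # 4096, 8192, 16384
done
count=15
size=$((count * native_bs))     # 61440, the size printed for this pass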
00:08:39.584 [2024-07-25 13:56:48.813331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:08:39.584 { 00:08:39.584 "subsystems": [ 00:08:39.584 { 00:08:39.584 "subsystem": "bdev", 00:08:39.584 "config": [ 00:08:39.584 { 00:08:39.584 "params": { 00:08:39.584 "trtype": "pcie", 00:08:39.584 "traddr": "0000:00:10.0", 00:08:39.584 "name": "Nvme0" 00:08:39.584 }, 00:08:39.584 "method": "bdev_nvme_attach_controller" 00:08:39.584 }, 00:08:39.584 { 00:08:39.584 "method": "bdev_wait_for_examine" 00:08:39.584 } 00:08:39.584 ] 00:08:39.584 } 00:08:39.584 ] 00:08:39.584 } 00:08:39.842 [2024-07-25 13:56:48.953549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.842 [2024-07-25 13:56:49.049352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.842 [2024-07-25 13:56:49.089844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.101  Copying: 60/60 [kB] (average 29 MBps) 00:08:40.101 00:08:40.101 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:40.101 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:40.101 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:40.101 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:40.361 { 00:08:40.361 "subsystems": [ 00:08:40.361 { 00:08:40.361 "subsystem": "bdev", 00:08:40.361 "config": [ 00:08:40.361 { 00:08:40.361 "params": { 00:08:40.361 "trtype": "pcie", 00:08:40.361 "traddr": "0000:00:10.0", 00:08:40.361 "name": "Nvme0" 00:08:40.361 }, 00:08:40.361 "method": "bdev_nvme_attach_controller" 00:08:40.361 }, 00:08:40.361 { 00:08:40.361 "method": "bdev_wait_for_examine" 00:08:40.361 } 00:08:40.361 ] 00:08:40.361 } 00:08:40.361 ] 00:08:40.361 } 00:08:40.361 [2024-07-25 13:56:49.429257] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
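For orientation, the qd=1 pass traced in these entries is a plain write/read/compare cycle (the compare step appears just below). A minimal self-contained sketch with the paths, bdev name, and flags copied from the log; the inline JSON mirrors the config that spdk_dd receives on /dev/fd/62 from gen_conf:

# Write generated data to the Nvme0n1 bdev, read it back, and verify.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }'

"$DD" --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")               # write
"$DD" --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")    # read back
diff -q "$dump0" "$dump1"                                                                     # must be identical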
00:08:40.361 [2024-07-25 13:56:49.429389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61908 ] 00:08:40.361 [2024-07-25 13:56:49.567031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.361 [2024-07-25 13:56:49.656233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.622 [2024-07-25 13:56:49.696770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.881  Copying: 60/60 [kB] (average 19 MBps) 00:08:40.881 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:40.881 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:40.882 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:40.882 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:40.882 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:40.882 13:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:40.882 { 00:08:40.882 "subsystems": [ 00:08:40.882 { 00:08:40.882 "subsystem": "bdev", 00:08:40.882 "config": [ 00:08:40.882 { 00:08:40.882 "params": { 00:08:40.882 "trtype": "pcie", 00:08:40.882 "traddr": "0000:00:10.0", 00:08:40.882 "name": "Nvme0" 00:08:40.882 }, 00:08:40.882 "method": "bdev_nvme_attach_controller" 00:08:40.882 }, 00:08:40.882 { 00:08:40.882 "method": "bdev_wait_for_examine" 00:08:40.882 } 00:08:40.882 ] 00:08:40.882 } 00:08:40.882 ] 00:08:40.882 } 00:08:40.882 [2024-07-25 13:56:50.042866] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:40.882 [2024-07-25 13:56:50.043029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61923 ] 00:08:40.882 [2024-07-25 13:56:50.179853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.142 [2024-07-25 13:56:50.282643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.142 [2024-07-25 13:56:50.325560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.405  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:41.405 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:41.405 13:56:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:41.987 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:41.987 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:41.988 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:41.988 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:41.988 [2024-07-25 13:56:51.079902] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:41.988 [2024-07-25 13:56:51.079979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61942 ] 00:08:41.988 { 00:08:41.988 "subsystems": [ 00:08:41.988 { 00:08:41.988 "subsystem": "bdev", 00:08:41.988 "config": [ 00:08:41.988 { 00:08:41.988 "params": { 00:08:41.988 "trtype": "pcie", 00:08:41.988 "traddr": "0000:00:10.0", 00:08:41.988 "name": "Nvme0" 00:08:41.988 }, 00:08:41.988 "method": "bdev_nvme_attach_controller" 00:08:41.988 }, 00:08:41.988 { 00:08:41.988 "method": "bdev_wait_for_examine" 00:08:41.988 } 00:08:41.988 ] 00:08:41.988 } 00:08:41.988 ] 00:08:41.988 } 00:08:41.988 [2024-07-25 13:56:51.218173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.258 [2024-07-25 13:56:51.319918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.258 [2024-07-25 13:56:51.360999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.531  Copying: 60/60 [kB] (average 58 MBps) 00:08:42.531 00:08:42.531 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:42.531 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:42.531 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:42.531 13:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:42.531 [2024-07-25 13:56:51.697813] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:42.531 [2024-07-25 13:56:51.697963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61956 ] 00:08:42.531 { 00:08:42.531 "subsystems": [ 00:08:42.531 { 00:08:42.531 "subsystem": "bdev", 00:08:42.531 "config": [ 00:08:42.531 { 00:08:42.531 "params": { 00:08:42.531 "trtype": "pcie", 00:08:42.531 "traddr": "0000:00:10.0", 00:08:42.531 "name": "Nvme0" 00:08:42.531 }, 00:08:42.531 "method": "bdev_nvme_attach_controller" 00:08:42.531 }, 00:08:42.531 { 00:08:42.531 "method": "bdev_wait_for_examine" 00:08:42.531 } 00:08:42.531 ] 00:08:42.531 } 00:08:42.531 ] 00:08:42.531 } 00:08:42.794 [2024-07-25 13:56:51.837855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.794 [2024-07-25 13:56:51.935406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.794 [2024-07-25 13:56:51.977251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.054  Copying: 60/60 [kB] (average 58 MBps) 00:08:43.054 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:43.054 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:43.055 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:43.055 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:43.055 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:43.055 [2024-07-25 13:56:52.319628] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:43.055 [2024-07-25 13:56:52.319700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:08:43.055 { 00:08:43.055 "subsystems": [ 00:08:43.055 { 00:08:43.055 "subsystem": "bdev", 00:08:43.055 "config": [ 00:08:43.055 { 00:08:43.055 "params": { 00:08:43.055 "trtype": "pcie", 00:08:43.055 "traddr": "0000:00:10.0", 00:08:43.055 "name": "Nvme0" 00:08:43.055 }, 00:08:43.055 "method": "bdev_nvme_attach_controller" 00:08:43.055 }, 00:08:43.055 { 00:08:43.055 "method": "bdev_wait_for_examine" 00:08:43.055 } 00:08:43.055 ] 00:08:43.055 } 00:08:43.055 ] 00:08:43.055 } 00:08:43.314 [2024-07-25 13:56:52.458437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.314 [2024-07-25 13:56:52.581547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.573 [2024-07-25 13:56:52.623353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.832  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:43.832 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:43.832 13:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:44.092 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:44.092 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:44.092 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:44.092 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:44.352 [2024-07-25 13:56:53.400766] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:44.352 [2024-07-25 13:56:53.400920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61996 ] 00:08:44.352 { 00:08:44.352 "subsystems": [ 00:08:44.352 { 00:08:44.352 "subsystem": "bdev", 00:08:44.352 "config": [ 00:08:44.352 { 00:08:44.352 "params": { 00:08:44.352 "trtype": "pcie", 00:08:44.352 "traddr": "0000:00:10.0", 00:08:44.352 "name": "Nvme0" 00:08:44.352 }, 00:08:44.352 "method": "bdev_nvme_attach_controller" 00:08:44.352 }, 00:08:44.352 { 00:08:44.352 "method": "bdev_wait_for_examine" 00:08:44.352 } 00:08:44.352 ] 00:08:44.352 } 00:08:44.352 ] 00:08:44.352 } 00:08:44.352 [2024-07-25 13:56:53.538321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.352 [2024-07-25 13:56:53.638714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.611 [2024-07-25 13:56:53.680220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.874  Copying: 56/56 [kB] (average 54 MBps) 00:08:44.874 00:08:44.874 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:44.874 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:44.874 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:44.874 13:56:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:44.874 { 00:08:44.874 "subsystems": [ 00:08:44.874 { 00:08:44.874 "subsystem": "bdev", 00:08:44.874 "config": [ 00:08:44.874 { 00:08:44.874 "params": { 00:08:44.874 "trtype": "pcie", 00:08:44.874 "traddr": "0000:00:10.0", 00:08:44.874 "name": "Nvme0" 00:08:44.874 }, 00:08:44.874 "method": "bdev_nvme_attach_controller" 00:08:44.874 }, 00:08:44.874 { 00:08:44.874 "method": "bdev_wait_for_examine" 00:08:44.874 } 00:08:44.874 ] 00:08:44.874 } 00:08:44.874 ] 00:08:44.874 } 00:08:44.874 [2024-07-25 13:56:54.016952] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:44.874 [2024-07-25 13:56:54.017086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62004 ] 00:08:44.874 [2024-07-25 13:56:54.155646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.134 [2024-07-25 13:56:54.257874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.134 [2024-07-25 13:56:54.302660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.393  Copying: 56/56 [kB] (average 27 MBps) 00:08:45.393 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:45.393 13:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.393 [2024-07-25 13:56:54.635094] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:45.393 [2024-07-25 13:56:54.635207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62025 ] 00:08:45.393 { 00:08:45.393 "subsystems": [ 00:08:45.393 { 00:08:45.393 "subsystem": "bdev", 00:08:45.393 "config": [ 00:08:45.393 { 00:08:45.393 "params": { 00:08:45.393 "trtype": "pcie", 00:08:45.393 "traddr": "0000:00:10.0", 00:08:45.393 "name": "Nvme0" 00:08:45.393 }, 00:08:45.393 "method": "bdev_nvme_attach_controller" 00:08:45.393 }, 00:08:45.393 { 00:08:45.393 "method": "bdev_wait_for_examine" 00:08:45.393 } 00:08:45.393 ] 00:08:45.393 } 00:08:45.393 ] 00:08:45.393 } 00:08:45.653 [2024-07-25 13:56:54.768193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.653 [2024-07-25 13:56:54.866620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.653 [2024-07-25 13:56:54.908209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.913  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:45.913 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:45.913 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:46.482 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:46.482 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:46.482 13:56:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 [2024-07-25 13:56:55.708210] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:46.482 [2024-07-25 13:56:55.708281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62044 ] 00:08:46.482 { 00:08:46.482 "subsystems": [ 00:08:46.482 { 00:08:46.482 "subsystem": "bdev", 00:08:46.482 "config": [ 00:08:46.482 { 00:08:46.482 "params": { 00:08:46.482 "trtype": "pcie", 00:08:46.482 "traddr": "0000:00:10.0", 00:08:46.482 "name": "Nvme0" 00:08:46.482 }, 00:08:46.482 "method": "bdev_nvme_attach_controller" 00:08:46.482 }, 00:08:46.482 { 00:08:46.482 "method": "bdev_wait_for_examine" 00:08:46.482 } 00:08:46.482 ] 00:08:46.482 } 00:08:46.482 ] 00:08:46.482 } 00:08:46.741 [2024-07-25 13:56:55.847925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.741 [2024-07-25 13:56:55.955973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.741 [2024-07-25 13:56:55.998882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.000  Copying: 56/56 [kB] (average 54 MBps) 00:08:47.000 00:08:47.000 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:47.000 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:47.000 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.000 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.260 [2024-07-25 13:56:56.334338] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:47.260 [2024-07-25 13:56:56.334399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:08:47.260 { 00:08:47.260 "subsystems": [ 00:08:47.260 { 00:08:47.260 "subsystem": "bdev", 00:08:47.260 "config": [ 00:08:47.260 { 00:08:47.260 "params": { 00:08:47.260 "trtype": "pcie", 00:08:47.260 "traddr": "0000:00:10.0", 00:08:47.260 "name": "Nvme0" 00:08:47.260 }, 00:08:47.260 "method": "bdev_nvme_attach_controller" 00:08:47.260 }, 00:08:47.260 { 00:08:47.260 "method": "bdev_wait_for_examine" 00:08:47.260 } 00:08:47.260 ] 00:08:47.260 } 00:08:47.260 ] 00:08:47.260 } 00:08:47.260 [2024-07-25 13:56:56.458853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.260 [2024-07-25 13:56:56.559384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.519 [2024-07-25 13:56:56.602016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.778  Copying: 56/56 [kB] (average 54 MBps) 00:08:47.778 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.778 13:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.778 [2024-07-25 13:56:56.954408] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:47.778 [2024-07-25 13:56:56.954825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62075 ] 00:08:47.778 { 00:08:47.778 "subsystems": [ 00:08:47.778 { 00:08:47.778 "subsystem": "bdev", 00:08:47.778 "config": [ 00:08:47.778 { 00:08:47.778 "params": { 00:08:47.778 "trtype": "pcie", 00:08:47.778 "traddr": "0000:00:10.0", 00:08:47.778 "name": "Nvme0" 00:08:47.778 }, 00:08:47.778 "method": "bdev_nvme_attach_controller" 00:08:47.778 }, 00:08:47.778 { 00:08:47.778 "method": "bdev_wait_for_examine" 00:08:47.778 } 00:08:47.778 ] 00:08:47.778 } 00:08:47.778 ] 00:08:47.778 } 00:08:48.036 [2024-07-25 13:56:57.093139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.036 [2024-07-25 13:56:57.197441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.036 [2024-07-25 13:56:57.241104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.294  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:48.294 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:48.294 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:48.862 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:48.862 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:48.862 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:48.862 13:56:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:48.862 [2024-07-25 13:56:58.027248] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:48.862 [2024-07-25 13:56:58.027335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:08:48.862 { 00:08:48.862 "subsystems": [ 00:08:48.862 { 00:08:48.862 "subsystem": "bdev", 00:08:48.862 "config": [ 00:08:48.862 { 00:08:48.862 "params": { 00:08:48.862 "trtype": "pcie", 00:08:48.862 "traddr": "0000:00:10.0", 00:08:48.862 "name": "Nvme0" 00:08:48.862 }, 00:08:48.862 "method": "bdev_nvme_attach_controller" 00:08:48.862 }, 00:08:48.862 { 00:08:48.862 "method": "bdev_wait_for_examine" 00:08:48.862 } 00:08:48.862 ] 00:08:48.862 } 00:08:48.862 ] 00:08:48.862 } 00:08:48.863 [2024-07-25 13:56:58.165761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.122 [2024-07-25 13:56:58.273241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.122 [2024-07-25 13:56:58.316582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.380  Copying: 48/48 [kB] (average 46 MBps) 00:08:49.380 00:08:49.380 13:56:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:49.380 13:56:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:49.380 13:56:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:49.380 13:56:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:49.380 [2024-07-25 13:56:58.652116] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:49.380 [2024-07-25 13:56:58.652191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:08:49.380 { 00:08:49.380 "subsystems": [ 00:08:49.381 { 00:08:49.381 "subsystem": "bdev", 00:08:49.381 "config": [ 00:08:49.381 { 00:08:49.381 "params": { 00:08:49.381 "trtype": "pcie", 00:08:49.381 "traddr": "0000:00:10.0", 00:08:49.381 "name": "Nvme0" 00:08:49.381 }, 00:08:49.381 "method": "bdev_nvme_attach_controller" 00:08:49.381 }, 00:08:49.381 { 00:08:49.381 "method": "bdev_wait_for_examine" 00:08:49.381 } 00:08:49.381 ] 00:08:49.381 } 00:08:49.381 ] 00:08:49.381 } 00:08:49.640 [2024-07-25 13:56:58.791823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.640 [2024-07-25 13:56:58.896899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.640 [2024-07-25 13:56:58.940094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.192  Copying: 48/48 [kB] (average 46 MBps) 00:08:50.192 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:50.192 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.192 [2024-07-25 13:56:59.293572] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:50.192 [2024-07-25 13:56:59.293635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:08:50.192 { 00:08:50.192 "subsystems": [ 00:08:50.192 { 00:08:50.192 "subsystem": "bdev", 00:08:50.192 "config": [ 00:08:50.192 { 00:08:50.192 "params": { 00:08:50.192 "trtype": "pcie", 00:08:50.192 "traddr": "0000:00:10.0", 00:08:50.192 "name": "Nvme0" 00:08:50.192 }, 00:08:50.192 "method": "bdev_nvme_attach_controller" 00:08:50.192 }, 00:08:50.192 { 00:08:50.192 "method": "bdev_wait_for_examine" 00:08:50.192 } 00:08:50.192 ] 00:08:50.192 } 00:08:50.192 ] 00:08:50.192 } 00:08:50.192 [2024-07-25 13:56:59.433187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.469 [2024-07-25 13:56:59.538111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.469 [2024-07-25 13:56:59.581369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.728  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:50.728 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:50.728 13:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.986 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:50.986 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:50.986 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:50.986 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.249 [2024-07-25 13:57:00.331928] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:51.249 [2024-07-25 13:57:00.332001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62142 ] 00:08:51.249 { 00:08:51.249 "subsystems": [ 00:08:51.249 { 00:08:51.249 "subsystem": "bdev", 00:08:51.249 "config": [ 00:08:51.249 { 00:08:51.249 "params": { 00:08:51.249 "trtype": "pcie", 00:08:51.249 "traddr": "0000:00:10.0", 00:08:51.249 "name": "Nvme0" 00:08:51.249 }, 00:08:51.249 "method": "bdev_nvme_attach_controller" 00:08:51.249 }, 00:08:51.249 { 00:08:51.249 "method": "bdev_wait_for_examine" 00:08:51.249 } 00:08:51.249 ] 00:08:51.249 } 00:08:51.249 ] 00:08:51.249 } 00:08:51.249 [2024-07-25 13:57:00.462578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.510 [2024-07-25 13:57:00.570131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.510 [2024-07-25 13:57:00.613252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.769  Copying: 48/48 [kB] (average 46 MBps) 00:08:51.769 00:08:51.769 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:51.769 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:51.769 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:51.769 13:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.769 [2024-07-25 13:57:00.950785] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:51.769 [2024-07-25 13:57:00.950863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:08:51.769 { 00:08:51.769 "subsystems": [ 00:08:51.769 { 00:08:51.769 "subsystem": "bdev", 00:08:51.769 "config": [ 00:08:51.769 { 00:08:51.769 "params": { 00:08:51.769 "trtype": "pcie", 00:08:51.769 "traddr": "0000:00:10.0", 00:08:51.769 "name": "Nvme0" 00:08:51.769 }, 00:08:51.769 "method": "bdev_nvme_attach_controller" 00:08:51.769 }, 00:08:51.769 { 00:08:51.769 "method": "bdev_wait_for_examine" 00:08:51.769 } 00:08:51.769 ] 00:08:51.769 } 00:08:51.769 ] 00:08:51.769 } 00:08:52.027 [2024-07-25 13:57:01.089414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.027 [2024-07-25 13:57:01.194748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.027 [2024-07-25 13:57:01.238690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.287  Copying: 48/48 [kB] (average 46 MBps) 00:08:52.287 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:52.287 13:57:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:52.546 { 00:08:52.546 "subsystems": [ 00:08:52.546 { 00:08:52.546 "subsystem": "bdev", 00:08:52.546 "config": [ 00:08:52.546 { 00:08:52.546 "params": { 00:08:52.546 "trtype": "pcie", 00:08:52.546 "traddr": "0000:00:10.0", 00:08:52.546 "name": "Nvme0" 00:08:52.546 }, 00:08:52.546 "method": "bdev_nvme_attach_controller" 00:08:52.546 }, 00:08:52.546 { 00:08:52.546 "method": "bdev_wait_for_examine" 00:08:52.546 } 00:08:52.546 ] 00:08:52.546 } 00:08:52.546 ] 00:08:52.546 } 00:08:52.546 [2024-07-25 13:57:01.613749] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:52.546 [2024-07-25 13:57:01.613864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62181 ] 00:08:52.546 [2024-07-25 13:57:01.758726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.805 [2024-07-25 13:57:01.871898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.805 [2024-07-25 13:57:01.914683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.064  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:53.064 00:08:53.064 ************************************ 00:08:53.064 END TEST dd_rw 00:08:53.064 ************************************ 00:08:53.064 00:08:53.064 real 0m13.870s 00:08:53.064 user 0m10.337s 00:08:53.064 sys 0m4.609s 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:53.064 ************************************ 00:08:53.064 START TEST dd_rw_offset 00:08:53.064 ************************************ 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:53.064 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:53.065 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=kcfm5kllrw783yxi11i6k25y5uy1ov8tseb6w1vu4inhyq9l4kai23dbpqwzy3yjiqd0477avevjkcs3gzy5uta2clnh2oqx8eheiibpio3f2vrl8tz3aag82uga7vomhuim3c69y6remszp3kp0036ooa6w0teabk4tuj99knjyfxu9zlcgma8teqk94xlfzt13yi126neeerm1o0lhbi7rm0cy69h6gl76oq8ws6kgq6s17zhyetdgsrscflj4qo314yt1xwhyy3gejzdlt6mimp9r6eo46ycvitd070z3f797kna21wnenh6uci1p0cbp5ao5ps2l1ix1tanc0mzpt4bz78551l1lbetnnloa0f75ierct1scu3kzjlharwzkzipxhjy7j09j9wt3omoqt0x2toulsnqnhxa78izkutiqv7lgid2bphuj34e53pqxvvdev283srrxkspbzy9qodohxjm7vsupigk9c371m3s80751h1dhtv4w5ayhc8tv5n5tv5hl8ngsdhi8uahic4y56oi9sdi3snte7274rxvnbrbvjhbd5sskr9ww96zb2qgwzrm56qomc9387mv8zwigx5akr6xkji8oqwo8ga3avdvp33tfxv9ivv5rzbdq5v75cz6vyuo7i8ms8n7fprz78aq1zldfuuntljb7deol71l1hum3opwcoe025ti996tlwc2otf1a14g8fa6or9nnlmwixpqm4d14wvgh7z6m42m5w5xpj21i2xcvf6brwikdvi0fki1255n66bh6qw57j7uli98u9u3gal716dk7crdyq4u30c0j5tphcq6dexc1z3x1xlvpsoydl4lean6gmjlruw10u66zfv929eb6xw4ta25i0jjsl4maqxc7wziwymftzr55dwjsdhwd9xxrt7mbwcfvdfwk6nxklznzwnxkw7hnh55n36ejcasqg7uqdfcd8z7nifkg9jl9pc7lniizc8fkwwxcl41yp3gr0u7g2y8wtkqdkb47e297pr5uyqvo53rrbk8gs8p6mk6ucip5l14vbdq1ect8obh078x61ot9qrnga39vn7tasdvzh9spquwad3juqm6vjrqfriqsmf0eihxj5pkxp23782gckvkk9s9j0qgcn47itf3v5l1gdyfzak3nl29ochzf2sanrj4bwjsz8cezh1dikx8mn5b3s6x7png4xf4j90xbns5hr8zyukx9eec0qcxn77dst2xj551qmpe85cbvcz0osdq69jpm0rk033r6a5jc8mlm5dsf68f8b3hkhuv0y7jpe4fvjzhcn9xgdc46swr6qcu2ieakcslmhjg05q2t050dvmfdj9ss3f630mztffgmmefrlwwy5byegljs644j8jqwukftrqlczbgwah320pl000mx5acpdm6h5xhpau7wk7all92inlo9s7hzhnk2vyoxn2yflfa09x3sxo5juap0swqt479n1wi6lhtwsody94gwmh8c9919epkd8jlg5pa772d89jrr751rv1j99lhsqjim00cv1s3gbralg01plj0nt4epf1t3jk5bj92onnr8xfyna7oasr93vtb2ji3jyp6l3o1qr2zi9rcof168a3lp6gtnzfl574jttkfq2wxkxhrsxjotz0dg252nohs85fblljfbb8vci0fq9mwjpb8m4b6cnfsi573ofeo985ns7k0qwr576wiz92jrq1uiu2atp6xa62wxi3xm4s5dci1v48fxmkji30ffrj3gf64wb1cfy2en11w9p282tq0gkjjeosihqz02bmfeg0tdcshlowtnhegngn4wseclnspc41ncjse03vyyeg21xp2191y55byhhgtvtncze59upvx55u3nzm5ee10hwk94p08du71v6uzwqjx4mbx3mq19797trqgy7ohx8b65zo4nge9ls0wd6ep3ru4mop6gxefn7u52tnqwns7u53v5ifa9nx2f4go1jcop4aqk6zauyqcygnly5cg1km5h66qj5iml11f9vay70prmiy7by9mbp6hah9b7vh50lq9geds4maiktz5uzm858nv41e3dbamoa3wb9hvjrtk8cszzjs70uw7tcxsn93fi9u0rpudmhs8qpjr9mt0z7ejilzio2tvg6qydkf1pxm5i7e34yk61cv45eu85edfjgwy7pqbejq2ct8rh3mjgpx3sopfqqw9ppbgxoj9d08ois4ihhfkjsblztmn8km0vo56oup2gfyupqnpul2o0kn8dnvzbks71d80wqcomc5uc3m8440xlfeh0vng534wgwek89tgtqewy5c1oew0q8hpuof78sz3tzd2eio19tn8pfjwijfagl7wvz6s030djk4zsjpi6kpk7684dfjh9yhmrv1ehr2bf4i4g5fomjgjljzul87ncxx97gl7635y50s9doca92vueu2t7h6ext9i31xa2h38wygrsfkkmniakv8b0p122evkxjv93yt6pvl0wp9ytiwi159qadciaijs7vaxfyrn2qspfae6xmavnq80wupbwh0w8im5jgqivoadmykj3ub49gc2v3ccenoxvwout2uhxxlksrl5ryzu938lsx61u694qhtvzt9sydwj3fkr9dyhkyxhjxina30k7xml7qeh3bx9akffvfxys6cdk2fk9voxxu6bm5li4f4vn2p9jhi0dv86kz29f0ez5inqsr56pk68wjj87270mn0whc3cwj7wasd5sef14zk0nb65q6uyy6linklputvmi71eeg0zcq2kmeyg8649agq86cl2tcwszm434h91xru7augx2amct7mfebxbslaz5nc6l6ibtig2pq5en1o670qy4suptmhr9fj0r5fgug6kz42zv07xf985hqrvs2lktso09n24uqhihrbzn4gbblieg3kkykd6v2menn4jdcixvxgsgytsk6tlmsqnibmdtf2svxb2oy17gxwg29d9pubdm7tgajla5b00rveuwdr1bsvnrs16of2hfow1dzhp7i15lyp3ybl266jogiz3bi4z7fusvh5fa0mv9fo5cmqsa6gia3ok6owjj3urnlu9aasu2l74c54etfdj8vk1lpakrx7dzo2aac9drudkktlomd6r7ziwr7yb8qcmdng64vzaxuifjen4ads8fnfru1msco1l86o4uf72pz1cwiu6s16ahb73qafdmm4e4qcl7ga2e5ae6h271wtbvdoi8f1kyuamnlva3ovhngqf92knt4up0bvhuza2njcjiphabsf70bco63hsd7stcu2v31154eg7qfotxe3m1mrr2hqvkdx8ku4uin06t6ttpfhpor7fjzmw6ljel9h7tpsgaaa40b7mvk8k3uje86zl6qss8cx5ldcibh8m4y0zzjzkyu5q6bzcje7el4a9hhc1psv6j42ef72yvz5bib1md09lpe7r6u7xjixruwj8vzm59ug3yh0gldyn0s20u6f2bv9bvz7g0cup2lajfdcnvq6jnrk
a6drpa9tnseyz0l9wkjmnscq4wwdkdpbz9hnbu94y1yawwxxegbajwfhfc1v70uy1wdll9guclvbkesjzhrjkrpi3c3g6i4lwb1v74ni3uqes7shlgi0swqyhb4dvn5fginmzl4kqfj748jof12y8qmzhfw25ov7qib7f0nd8ntcjfvx4noytqbcmbzyyhwn8a5oqxzwbmi8m46lo6jksr3u7z236ug5vbdb73zvrg2wuj06jc0vfrvcm3p2umpwlad49tfl3n1jlhir6r1iqk2i1a3hv8pzpwcm5qjn26z8qgutbhq21yd7lh3wxyb853q6k6kg7y9jaig8qp80u5qt7eqma3x2n53hdkxac3j16q4wc9d9433qamta9ia7a9bf5a0r5wulp32d0gp0lnpvu9bqgdu5kku8m59d8fnjbclqf16zc0vqswki31gbs9zmfyuy8dx6lyz851zh80hj4b2lwviyocnzh1wbkv91ilwrgiho3sz8hxefn3unuknwmqjohg0d79wxoni9opd1mhx3184pfv 00:08:53.065 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:53.065 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:53.065 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:53.065 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:53.324 [2024-07-25 13:57:02.371674] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:53.324 [2024-07-25 13:57:02.371830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62207 ] 00:08:53.324 { 00:08:53.324 "subsystems": [ 00:08:53.324 { 00:08:53.324 "subsystem": "bdev", 00:08:53.324 "config": [ 00:08:53.324 { 00:08:53.324 "params": { 00:08:53.324 "trtype": "pcie", 00:08:53.324 "traddr": "0000:00:10.0", 00:08:53.324 "name": "Nvme0" 00:08:53.324 }, 00:08:53.324 "method": "bdev_nvme_attach_controller" 00:08:53.324 }, 00:08:53.324 { 00:08:53.324 "method": "bdev_wait_for_examine" 00:08:53.324 } 00:08:53.324 ] 00:08:53.324 } 00:08:53.324 ] 00:08:53.324 } 00:08:53.324 [2024-07-25 13:57:02.512254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.582 [2024-07-25 13:57:02.634020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.582 [2024-07-25 13:57:02.676995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.841  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:53.841 00:08:53.841 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:53.841 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:53.841 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:53.841 13:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:53.841 [2024-07-25 13:57:03.021852] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:53.841 [2024-07-25 13:57:03.021923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62226 ] 00:08:53.841 { 00:08:53.841 "subsystems": [ 00:08:53.841 { 00:08:53.841 "subsystem": "bdev", 00:08:53.841 "config": [ 00:08:53.841 { 00:08:53.841 "params": { 00:08:53.841 "trtype": "pcie", 00:08:53.841 "traddr": "0000:00:10.0", 00:08:53.841 "name": "Nvme0" 00:08:53.841 }, 00:08:53.841 "method": "bdev_nvme_attach_controller" 00:08:53.841 }, 00:08:53.841 { 00:08:53.841 "method": "bdev_wait_for_examine" 00:08:53.841 } 00:08:53.841 ] 00:08:53.841 } 00:08:53.841 ] 00:08:53.841 } 00:08:54.099 [2024-07-25 13:57:03.167215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.099 [2024-07-25 13:57:03.278629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.099 [2024-07-25 13:57:03.321536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.358  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:54.358 00:08:54.358 13:57:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:54.358 ************************************ 00:08:54.358 END TEST dd_rw_offset 00:08:54.358 ************************************ 00:08:54.359 13:57:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ kcfm5kllrw783yxi11i6k25y5uy1ov8tseb6w1vu4inhyq9l4kai23dbpqwzy3yjiqd0477avevjkcs3gzy5uta2clnh2oqx8eheiibpio3f2vrl8tz3aag82uga7vomhuim3c69y6remszp3kp0036ooa6w0teabk4tuj99knjyfxu9zlcgma8teqk94xlfzt13yi126neeerm1o0lhbi7rm0cy69h6gl76oq8ws6kgq6s17zhyetdgsrscflj4qo314yt1xwhyy3gejzdlt6mimp9r6eo46ycvitd070z3f797kna21wnenh6uci1p0cbp5ao5ps2l1ix1tanc0mzpt4bz78551l1lbetnnloa0f75ierct1scu3kzjlharwzkzipxhjy7j09j9wt3omoqt0x2toulsnqnhxa78izkutiqv7lgid2bphuj34e53pqxvvdev283srrxkspbzy9qodohxjm7vsupigk9c371m3s80751h1dhtv4w5ayhc8tv5n5tv5hl8ngsdhi8uahic4y56oi9sdi3snte7274rxvnbrbvjhbd5sskr9ww96zb2qgwzrm56qomc9387mv8zwigx5akr6xkji8oqwo8ga3avdvp33tfxv9ivv5rzbdq5v75cz6vyuo7i8ms8n7fprz78aq1zldfuuntljb7deol71l1hum3opwcoe025ti996tlwc2otf1a14g8fa6or9nnlmwixpqm4d14wvgh7z6m42m5w5xpj21i2xcvf6brwikdvi0fki1255n66bh6qw57j7uli98u9u3gal716dk7crdyq4u30c0j5tphcq6dexc1z3x1xlvpsoydl4lean6gmjlruw10u66zfv929eb6xw4ta25i0jjsl4maqxc7wziwymftzr55dwjsdhwd9xxrt7mbwcfvdfwk6nxklznzwnxkw7hnh55n36ejcasqg7uqdfcd8z7nifkg9jl9pc7lniizc8fkwwxcl41yp3gr0u7g2y8wtkqdkb47e297pr5uyqvo53rrbk8gs8p6mk6ucip5l14vbdq1ect8obh078x61ot9qrnga39vn7tasdvzh9spquwad3juqm6vjrqfriqsmf0eihxj5pkxp23782gckvkk9s9j0qgcn47itf3v5l1gdyfzak3nl29ochzf2sanrj4bwjsz8cezh1dikx8mn5b3s6x7png4xf4j90xbns5hr8zyukx9eec0qcxn77dst2xj551qmpe85cbvcz0osdq69jpm0rk033r6a5jc8mlm5dsf68f8b3hkhuv0y7jpe4fvjzhcn9xgdc46swr6qcu2ieakcslmhjg05q2t050dvmfdj9ss3f630mztffgmmefrlwwy5byegljs644j8jqwukftrqlczbgwah320pl000mx5acpdm6h5xhpau7wk7all92inlo9s7hzhnk2vyoxn2yflfa09x3sxo5juap0swqt479n1wi6lhtwsody94gwmh8c9919epkd8jlg5pa772d89jrr751rv1j99lhsqjim00cv1s3gbralg01plj0nt4epf1t3jk5bj92onnr8xfyna7oasr93vtb2ji3jyp6l3o1qr2zi9rcof168a3lp6gtnzfl574jttkfq2wxkxhrsxjotz0dg252nohs85fblljfbb8vci0fq9mwjpb8m4b6cnfsi573ofeo985ns7k0qwr576wiz92jrq1uiu2atp6xa62wxi3xm4s5dci1v48fxmkji30ffrj3gf64wb1cfy2en11w9p282tq0gkjjeosihqz02bmfeg0tdcshlowtnhegngn4wseclnspc41ncjse03vyyeg21xp2191y55byhhgtvtncze59upvx55u3nzm5ee10hwk94p08du71v6uzwqjx4mbx3mq19797trqgy7ohx8b65zo4nge9ls0wd6ep3ru4mop6gxefn7u52tnqwns7u53v5ifa9nx2f4go1jcop4aqk6zauyqcygn
ly5cg1km5h66qj5iml11f9vay70prmiy7by9mbp6hah9b7vh50lq9geds4maiktz5uzm858nv41e3dbamoa3wb9hvjrtk8cszzjs70uw7tcxsn93fi9u0rpudmhs8qpjr9mt0z7ejilzio2tvg6qydkf1pxm5i7e34yk61cv45eu85edfjgwy7pqbejq2ct8rh3mjgpx3sopfqqw9ppbgxoj9d08ois4ihhfkjsblztmn8km0vo56oup2gfyupqnpul2o0kn8dnvzbks71d80wqcomc5uc3m8440xlfeh0vng534wgwek89tgtqewy5c1oew0q8hpuof78sz3tzd2eio19tn8pfjwijfagl7wvz6s030djk4zsjpi6kpk7684dfjh9yhmrv1ehr2bf4i4g5fomjgjljzul87ncxx97gl7635y50s9doca92vueu2t7h6ext9i31xa2h38wygrsfkkmniakv8b0p122evkxjv93yt6pvl0wp9ytiwi159qadciaijs7vaxfyrn2qspfae6xmavnq80wupbwh0w8im5jgqivoadmykj3ub49gc2v3ccenoxvwout2uhxxlksrl5ryzu938lsx61u694qhtvzt9sydwj3fkr9dyhkyxhjxina30k7xml7qeh3bx9akffvfxys6cdk2fk9voxxu6bm5li4f4vn2p9jhi0dv86kz29f0ez5inqsr56pk68wjj87270mn0whc3cwj7wasd5sef14zk0nb65q6uyy6linklputvmi71eeg0zcq2kmeyg8649agq86cl2tcwszm434h91xru7augx2amct7mfebxbslaz5nc6l6ibtig2pq5en1o670qy4suptmhr9fj0r5fgug6kz42zv07xf985hqrvs2lktso09n24uqhihrbzn4gbblieg3kkykd6v2menn4jdcixvxgsgytsk6tlmsqnibmdtf2svxb2oy17gxwg29d9pubdm7tgajla5b00rveuwdr1bsvnrs16of2hfow1dzhp7i15lyp3ybl266jogiz3bi4z7fusvh5fa0mv9fo5cmqsa6gia3ok6owjj3urnlu9aasu2l74c54etfdj8vk1lpakrx7dzo2aac9drudkktlomd6r7ziwr7yb8qcmdng64vzaxuifjen4ads8fnfru1msco1l86o4uf72pz1cwiu6s16ahb73qafdmm4e4qcl7ga2e5ae6h271wtbvdoi8f1kyuamnlva3ovhngqf92knt4up0bvhuza2njcjiphabsf70bco63hsd7stcu2v31154eg7qfotxe3m1mrr2hqvkdx8ku4uin06t6ttpfhpor7fjzmw6ljel9h7tpsgaaa40b7mvk8k3uje86zl6qss8cx5ldcibh8m4y0zzjzkyu5q6bzcje7el4a9hhc1psv6j42ef72yvz5bib1md09lpe7r6u7xjixruwj8vzm59ug3yh0gldyn0s20u6f2bv9bvz7g0cup2lajfdcnvq6jnrka6drpa9tnseyz0l9wkjmnscq4wwdkdpbz9hnbu94y1yawwxxegbajwfhfc1v70uy1wdll9guclvbkesjzhrjkrpi3c3g6i4lwb1v74ni3uqes7shlgi0swqyhb4dvn5fginmzl4kqfj748jof12y8qmzhfw25ov7qib7f0nd8ntcjfvx4noytqbcmbzyyhwn8a5oqxzwbmi8m46lo6jksr3u7z236ug5vbdb73zvrg2wuj06jc0vfrvcm3p2umpwlad49tfl3n1jlhir6r1iqk2i1a3hv8pzpwcm5qjn26z8qgutbhq21yd7lh3wxyb853q6k6kg7y9jaig8qp80u5qt7eqma3x2n53hdkxac3j16q4wc9d9433qamta9ia7a9bf5a0r5wulp32d0gp0lnpvu9bqgdu5kku8m59d8fnjbclqf16zc0vqswki31gbs9zmfyuy8dx6lyz851zh80hj4b2lwviyocnzh1wbkv91ilwrgiho3sz8hxefn3unuknwmqjohg0d79wxoni9opd1mhx3184pfv == 
\k\c\f\m\5\k\l\l\r\w\7\8\3\y\x\i\1\1\i\6\k\2\5\y\5\u\y\1\o\v\8\t\s\e\b\6\w\1\v\u\4\i\n\h\y\q\9\l\4\k\a\i\2\3\d\b\p\q\w\z\y\3\y\j\i\q\d\0\4\7\7\a\v\e\v\j\k\c\s\3\g\z\y\5\u\t\a\2\c\l\n\h\2\o\q\x\8\e\h\e\i\i\b\p\i\o\3\f\2\v\r\l\8\t\z\3\a\a\g\8\2\u\g\a\7\v\o\m\h\u\i\m\3\c\6\9\y\6\r\e\m\s\z\p\3\k\p\0\0\3\6\o\o\a\6\w\0\t\e\a\b\k\4\t\u\j\9\9\k\n\j\y\f\x\u\9\z\l\c\g\m\a\8\t\e\q\k\9\4\x\l\f\z\t\1\3\y\i\1\2\6\n\e\e\e\r\m\1\o\0\l\h\b\i\7\r\m\0\c\y\6\9\h\6\g\l\7\6\o\q\8\w\s\6\k\g\q\6\s\1\7\z\h\y\e\t\d\g\s\r\s\c\f\l\j\4\q\o\3\1\4\y\t\1\x\w\h\y\y\3\g\e\j\z\d\l\t\6\m\i\m\p\9\r\6\e\o\4\6\y\c\v\i\t\d\0\7\0\z\3\f\7\9\7\k\n\a\2\1\w\n\e\n\h\6\u\c\i\1\p\0\c\b\p\5\a\o\5\p\s\2\l\1\i\x\1\t\a\n\c\0\m\z\p\t\4\b\z\7\8\5\5\1\l\1\l\b\e\t\n\n\l\o\a\0\f\7\5\i\e\r\c\t\1\s\c\u\3\k\z\j\l\h\a\r\w\z\k\z\i\p\x\h\j\y\7\j\0\9\j\9\w\t\3\o\m\o\q\t\0\x\2\t\o\u\l\s\n\q\n\h\x\a\7\8\i\z\k\u\t\i\q\v\7\l\g\i\d\2\b\p\h\u\j\3\4\e\5\3\p\q\x\v\v\d\e\v\2\8\3\s\r\r\x\k\s\p\b\z\y\9\q\o\d\o\h\x\j\m\7\v\s\u\p\i\g\k\9\c\3\7\1\m\3\s\8\0\7\5\1\h\1\d\h\t\v\4\w\5\a\y\h\c\8\t\v\5\n\5\t\v\5\h\l\8\n\g\s\d\h\i\8\u\a\h\i\c\4\y\5\6\o\i\9\s\d\i\3\s\n\t\e\7\2\7\4\r\x\v\n\b\r\b\v\j\h\b\d\5\s\s\k\r\9\w\w\9\6\z\b\2\q\g\w\z\r\m\5\6\q\o\m\c\9\3\8\7\m\v\8\z\w\i\g\x\5\a\k\r\6\x\k\j\i\8\o\q\w\o\8\g\a\3\a\v\d\v\p\3\3\t\f\x\v\9\i\v\v\5\r\z\b\d\q\5\v\7\5\c\z\6\v\y\u\o\7\i\8\m\s\8\n\7\f\p\r\z\7\8\a\q\1\z\l\d\f\u\u\n\t\l\j\b\7\d\e\o\l\7\1\l\1\h\u\m\3\o\p\w\c\o\e\0\2\5\t\i\9\9\6\t\l\w\c\2\o\t\f\1\a\1\4\g\8\f\a\6\o\r\9\n\n\l\m\w\i\x\p\q\m\4\d\1\4\w\v\g\h\7\z\6\m\4\2\m\5\w\5\x\p\j\2\1\i\2\x\c\v\f\6\b\r\w\i\k\d\v\i\0\f\k\i\1\2\5\5\n\6\6\b\h\6\q\w\5\7\j\7\u\l\i\9\8\u\9\u\3\g\a\l\7\1\6\d\k\7\c\r\d\y\q\4\u\3\0\c\0\j\5\t\p\h\c\q\6\d\e\x\c\1\z\3\x\1\x\l\v\p\s\o\y\d\l\4\l\e\a\n\6\g\m\j\l\r\u\w\1\0\u\6\6\z\f\v\9\2\9\e\b\6\x\w\4\t\a\2\5\i\0\j\j\s\l\4\m\a\q\x\c\7\w\z\i\w\y\m\f\t\z\r\5\5\d\w\j\s\d\h\w\d\9\x\x\r\t\7\m\b\w\c\f\v\d\f\w\k\6\n\x\k\l\z\n\z\w\n\x\k\w\7\h\n\h\5\5\n\3\6\e\j\c\a\s\q\g\7\u\q\d\f\c\d\8\z\7\n\i\f\k\g\9\j\l\9\p\c\7\l\n\i\i\z\c\8\f\k\w\w\x\c\l\4\1\y\p\3\g\r\0\u\7\g\2\y\8\w\t\k\q\d\k\b\4\7\e\2\9\7\p\r\5\u\y\q\v\o\5\3\r\r\b\k\8\g\s\8\p\6\m\k\6\u\c\i\p\5\l\1\4\v\b\d\q\1\e\c\t\8\o\b\h\0\7\8\x\6\1\o\t\9\q\r\n\g\a\3\9\v\n\7\t\a\s\d\v\z\h\9\s\p\q\u\w\a\d\3\j\u\q\m\6\v\j\r\q\f\r\i\q\s\m\f\0\e\i\h\x\j\5\p\k\x\p\2\3\7\8\2\g\c\k\v\k\k\9\s\9\j\0\q\g\c\n\4\7\i\t\f\3\v\5\l\1\g\d\y\f\z\a\k\3\n\l\2\9\o\c\h\z\f\2\s\a\n\r\j\4\b\w\j\s\z\8\c\e\z\h\1\d\i\k\x\8\m\n\5\b\3\s\6\x\7\p\n\g\4\x\f\4\j\9\0\x\b\n\s\5\h\r\8\z\y\u\k\x\9\e\e\c\0\q\c\x\n\7\7\d\s\t\2\x\j\5\5\1\q\m\p\e\8\5\c\b\v\c\z\0\o\s\d\q\6\9\j\p\m\0\r\k\0\3\3\r\6\a\5\j\c\8\m\l\m\5\d\s\f\6\8\f\8\b\3\h\k\h\u\v\0\y\7\j\p\e\4\f\v\j\z\h\c\n\9\x\g\d\c\4\6\s\w\r\6\q\c\u\2\i\e\a\k\c\s\l\m\h\j\g\0\5\q\2\t\0\5\0\d\v\m\f\d\j\9\s\s\3\f\6\3\0\m\z\t\f\f\g\m\m\e\f\r\l\w\w\y\5\b\y\e\g\l\j\s\6\4\4\j\8\j\q\w\u\k\f\t\r\q\l\c\z\b\g\w\a\h\3\2\0\p\l\0\0\0\m\x\5\a\c\p\d\m\6\h\5\x\h\p\a\u\7\w\k\7\a\l\l\9\2\i\n\l\o\9\s\7\h\z\h\n\k\2\v\y\o\x\n\2\y\f\l\f\a\0\9\x\3\s\x\o\5\j\u\a\p\0\s\w\q\t\4\7\9\n\1\w\i\6\l\h\t\w\s\o\d\y\9\4\g\w\m\h\8\c\9\9\1\9\e\p\k\d\8\j\l\g\5\p\a\7\7\2\d\8\9\j\r\r\7\5\1\r\v\1\j\9\9\l\h\s\q\j\i\m\0\0\c\v\1\s\3\g\b\r\a\l\g\0\1\p\l\j\0\n\t\4\e\p\f\1\t\3\j\k\5\b\j\9\2\o\n\n\r\8\x\f\y\n\a\7\o\a\s\r\9\3\v\t\b\2\j\i\3\j\y\p\6\l\3\o\1\q\r\2\z\i\9\r\c\o\f\1\6\8\a\3\l\p\6\g\t\n\z\f\l\5\7\4\j\t\t\k\f\q\2\w\x\k\x\h\r\s\x\j\o\t\z\0\d\g\2\5\2\n\o\h\s\8\5\f\b\l\l\j\f\b\b\8\v\c\i\0\f\q\9\m\w\j\p\b\8\m\4\b\6\c\n\f\s\i\5\7\3\o\f\e\o\9\8\5\n\s\7\k\0\q\w\r\5\7\6\w\i\z\9\2\j\r\q\1\u\i\u\2\a\t\p\6\x\a\6\2\w\x\i\3\x\m\4\s\5\d\c\i\1\v\4\8\f\x\m\k\j\i\3\0\f\f\r\
j\3\g\f\6\4\w\b\1\c\f\y\2\e\n\1\1\w\9\p\2\8\2\t\q\0\g\k\j\j\e\o\s\i\h\q\z\0\2\b\m\f\e\g\0\t\d\c\s\h\l\o\w\t\n\h\e\g\n\g\n\4\w\s\e\c\l\n\s\p\c\4\1\n\c\j\s\e\0\3\v\y\y\e\g\2\1\x\p\2\1\9\1\y\5\5\b\y\h\h\g\t\v\t\n\c\z\e\5\9\u\p\v\x\5\5\u\3\n\z\m\5\e\e\1\0\h\w\k\9\4\p\0\8\d\u\7\1\v\6\u\z\w\q\j\x\4\m\b\x\3\m\q\1\9\7\9\7\t\r\q\g\y\7\o\h\x\8\b\6\5\z\o\4\n\g\e\9\l\s\0\w\d\6\e\p\3\r\u\4\m\o\p\6\g\x\e\f\n\7\u\5\2\t\n\q\w\n\s\7\u\5\3\v\5\i\f\a\9\n\x\2\f\4\g\o\1\j\c\o\p\4\a\q\k\6\z\a\u\y\q\c\y\g\n\l\y\5\c\g\1\k\m\5\h\6\6\q\j\5\i\m\l\1\1\f\9\v\a\y\7\0\p\r\m\i\y\7\b\y\9\m\b\p\6\h\a\h\9\b\7\v\h\5\0\l\q\9\g\e\d\s\4\m\a\i\k\t\z\5\u\z\m\8\5\8\n\v\4\1\e\3\d\b\a\m\o\a\3\w\b\9\h\v\j\r\t\k\8\c\s\z\z\j\s\7\0\u\w\7\t\c\x\s\n\9\3\f\i\9\u\0\r\p\u\d\m\h\s\8\q\p\j\r\9\m\t\0\z\7\e\j\i\l\z\i\o\2\t\v\g\6\q\y\d\k\f\1\p\x\m\5\i\7\e\3\4\y\k\6\1\c\v\4\5\e\u\8\5\e\d\f\j\g\w\y\7\p\q\b\e\j\q\2\c\t\8\r\h\3\m\j\g\p\x\3\s\o\p\f\q\q\w\9\p\p\b\g\x\o\j\9\d\0\8\o\i\s\4\i\h\h\f\k\j\s\b\l\z\t\m\n\8\k\m\0\v\o\5\6\o\u\p\2\g\f\y\u\p\q\n\p\u\l\2\o\0\k\n\8\d\n\v\z\b\k\s\7\1\d\8\0\w\q\c\o\m\c\5\u\c\3\m\8\4\4\0\x\l\f\e\h\0\v\n\g\5\3\4\w\g\w\e\k\8\9\t\g\t\q\e\w\y\5\c\1\o\e\w\0\q\8\h\p\u\o\f\7\8\s\z\3\t\z\d\2\e\i\o\1\9\t\n\8\p\f\j\w\i\j\f\a\g\l\7\w\v\z\6\s\0\3\0\d\j\k\4\z\s\j\p\i\6\k\p\k\7\6\8\4\d\f\j\h\9\y\h\m\r\v\1\e\h\r\2\b\f\4\i\4\g\5\f\o\m\j\g\j\l\j\z\u\l\8\7\n\c\x\x\9\7\g\l\7\6\3\5\y\5\0\s\9\d\o\c\a\9\2\v\u\e\u\2\t\7\h\6\e\x\t\9\i\3\1\x\a\2\h\3\8\w\y\g\r\s\f\k\k\m\n\i\a\k\v\8\b\0\p\1\2\2\e\v\k\x\j\v\9\3\y\t\6\p\v\l\0\w\p\9\y\t\i\w\i\1\5\9\q\a\d\c\i\a\i\j\s\7\v\a\x\f\y\r\n\2\q\s\p\f\a\e\6\x\m\a\v\n\q\8\0\w\u\p\b\w\h\0\w\8\i\m\5\j\g\q\i\v\o\a\d\m\y\k\j\3\u\b\4\9\g\c\2\v\3\c\c\e\n\o\x\v\w\o\u\t\2\u\h\x\x\l\k\s\r\l\5\r\y\z\u\9\3\8\l\s\x\6\1\u\6\9\4\q\h\t\v\z\t\9\s\y\d\w\j\3\f\k\r\9\d\y\h\k\y\x\h\j\x\i\n\a\3\0\k\7\x\m\l\7\q\e\h\3\b\x\9\a\k\f\f\v\f\x\y\s\6\c\d\k\2\f\k\9\v\o\x\x\u\6\b\m\5\l\i\4\f\4\v\n\2\p\9\j\h\i\0\d\v\8\6\k\z\2\9\f\0\e\z\5\i\n\q\s\r\5\6\p\k\6\8\w\j\j\8\7\2\7\0\m\n\0\w\h\c\3\c\w\j\7\w\a\s\d\5\s\e\f\1\4\z\k\0\n\b\6\5\q\6\u\y\y\6\l\i\n\k\l\p\u\t\v\m\i\7\1\e\e\g\0\z\c\q\2\k\m\e\y\g\8\6\4\9\a\g\q\8\6\c\l\2\t\c\w\s\z\m\4\3\4\h\9\1\x\r\u\7\a\u\g\x\2\a\m\c\t\7\m\f\e\b\x\b\s\l\a\z\5\n\c\6\l\6\i\b\t\i\g\2\p\q\5\e\n\1\o\6\7\0\q\y\4\s\u\p\t\m\h\r\9\f\j\0\r\5\f\g\u\g\6\k\z\4\2\z\v\0\7\x\f\9\8\5\h\q\r\v\s\2\l\k\t\s\o\0\9\n\2\4\u\q\h\i\h\r\b\z\n\4\g\b\b\l\i\e\g\3\k\k\y\k\d\6\v\2\m\e\n\n\4\j\d\c\i\x\v\x\g\s\g\y\t\s\k\6\t\l\m\s\q\n\i\b\m\d\t\f\2\s\v\x\b\2\o\y\1\7\g\x\w\g\2\9\d\9\p\u\b\d\m\7\t\g\a\j\l\a\5\b\0\0\r\v\e\u\w\d\r\1\b\s\v\n\r\s\1\6\o\f\2\h\f\o\w\1\d\z\h\p\7\i\1\5\l\y\p\3\y\b\l\2\6\6\j\o\g\i\z\3\b\i\4\z\7\f\u\s\v\h\5\f\a\0\m\v\9\f\o\5\c\m\q\s\a\6\g\i\a\3\o\k\6\o\w\j\j\3\u\r\n\l\u\9\a\a\s\u\2\l\7\4\c\5\4\e\t\f\d\j\8\v\k\1\l\p\a\k\r\x\7\d\z\o\2\a\a\c\9\d\r\u\d\k\k\t\l\o\m\d\6\r\7\z\i\w\r\7\y\b\8\q\c\m\d\n\g\6\4\v\z\a\x\u\i\f\j\e\n\4\a\d\s\8\f\n\f\r\u\1\m\s\c\o\1\l\8\6\o\4\u\f\7\2\p\z\1\c\w\i\u\6\s\1\6\a\h\b\7\3\q\a\f\d\m\m\4\e\4\q\c\l\7\g\a\2\e\5\a\e\6\h\2\7\1\w\t\b\v\d\o\i\8\f\1\k\y\u\a\m\n\l\v\a\3\o\v\h\n\g\q\f\9\2\k\n\t\4\u\p\0\b\v\h\u\z\a\2\n\j\c\j\i\p\h\a\b\s\f\7\0\b\c\o\6\3\h\s\d\7\s\t\c\u\2\v\3\1\1\5\4\e\g\7\q\f\o\t\x\e\3\m\1\m\r\r\2\h\q\v\k\d\x\8\k\u\4\u\i\n\0\6\t\6\t\t\p\f\h\p\o\r\7\f\j\z\m\w\6\l\j\e\l\9\h\7\t\p\s\g\a\a\a\4\0\b\7\m\v\k\8\k\3\u\j\e\8\6\z\l\6\q\s\s\8\c\x\5\l\d\c\i\b\h\8\m\4\y\0\z\z\j\z\k\y\u\5\q\6\b\z\c\j\e\7\e\l\4\a\9\h\h\c\1\p\s\v\6\j\4\2\e\f\7\2\y\v\z\5\b\i\b\1\m\d\0\9\l\p\e\7\r\6\u\7\x\j\i\x\r\u\w\j\8\v\z\m\5\9\u\g\3\y\h\0\g\l\d\y\n\0\s\2\0\u\6\f\2\b\v\9\b\v\z\7\g\0\c\u\p\2\l\a\j\f\d\c\n\v\q\6\j\n\r\k\a\6\d\r\p
\a\9\t\n\s\e\y\z\0\l\9\w\k\j\m\n\s\c\q\4\w\w\d\k\d\p\b\z\9\h\n\b\u\9\4\y\1\y\a\w\w\x\x\e\g\b\a\j\w\f\h\f\c\1\v\7\0\u\y\1\w\d\l\l\9\g\u\c\l\v\b\k\e\s\j\z\h\r\j\k\r\p\i\3\c\3\g\6\i\4\l\w\b\1\v\7\4\n\i\3\u\q\e\s\7\s\h\l\g\i\0\s\w\q\y\h\b\4\d\v\n\5\f\g\i\n\m\z\l\4\k\q\f\j\7\4\8\j\o\f\1\2\y\8\q\m\z\h\f\w\2\5\o\v\7\q\i\b\7\f\0\n\d\8\n\t\c\j\f\v\x\4\n\o\y\t\q\b\c\m\b\z\y\y\h\w\n\8\a\5\o\q\x\z\w\b\m\i\8\m\4\6\l\o\6\j\k\s\r\3\u\7\z\2\3\6\u\g\5\v\b\d\b\7\3\z\v\r\g\2\w\u\j\0\6\j\c\0\v\f\r\v\c\m\3\p\2\u\m\p\w\l\a\d\4\9\t\f\l\3\n\1\j\l\h\i\r\6\r\1\i\q\k\2\i\1\a\3\h\v\8\p\z\p\w\c\m\5\q\j\n\2\6\z\8\q\g\u\t\b\h\q\2\1\y\d\7\l\h\3\w\x\y\b\8\5\3\q\6\k\6\k\g\7\y\9\j\a\i\g\8\q\p\8\0\u\5\q\t\7\e\q\m\a\3\x\2\n\5\3\h\d\k\x\a\c\3\j\1\6\q\4\w\c\9\d\9\4\3\3\q\a\m\t\a\9\i\a\7\a\9\b\f\5\a\0\r\5\w\u\l\p\3\2\d\0\g\p\0\l\n\p\v\u\9\b\q\g\d\u\5\k\k\u\8\m\5\9\d\8\f\n\j\b\c\l\q\f\1\6\z\c\0\v\q\s\w\k\i\3\1\g\b\s\9\z\m\f\y\u\y\8\d\x\6\l\y\z\8\5\1\z\h\8\0\h\j\4\b\2\l\w\v\i\y\o\c\n\z\h\1\w\b\k\v\9\1\i\l\w\r\g\i\h\o\3\s\z\8\h\x\e\f\n\3\u\n\u\k\n\w\m\q\j\o\h\g\0\d\7\9\w\x\o\n\i\9\o\p\d\1\m\h\x\3\1\8\4\p\f\v ]] 00:08:54.359 00:08:54.359 real 0m1.337s 00:08:54.359 user 0m0.948s 00:08:54.359 sys 0m0.525s 00:08:54.359 13:57:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.359 13:57:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:54.616 13:57:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:54.616 [2024-07-25 13:57:03.723660] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:54.616 [2024-07-25 13:57:03.723732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ]
00:08:54.616 {
00:08:54.616 "subsystems": [
00:08:54.616 {
00:08:54.616 "subsystem": "bdev",
00:08:54.616 "config": [
00:08:54.616 {
00:08:54.616 "params": {
00:08:54.616 "trtype": "pcie",
00:08:54.616 "traddr": "0000:00:10.0",
00:08:54.616 "name": "Nvme0"
00:08:54.616 },
00:08:54.616 "method": "bdev_nvme_attach_controller"
00:08:54.616 },
00:08:54.616 {
00:08:54.616 "method": "bdev_wait_for_examine"
00:08:54.616 }
00:08:54.616 ]
00:08:54.616 }
00:08:54.616 ]
00:08:54.616 }
00:08:54.616 [2024-07-25 13:57:03.862391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.875 [2024-07-25 13:57:03.966817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.875 [2024-07-25 13:57:04.010160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:08:55.133  Copying: 1024/1024 [kB] (average 500 MBps)
00:08:55.133
00:08:55.133 13:57:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:08:55.133 ************************************
00:08:55.133 END TEST spdk_dd_basic_rw
00:08:55.133 ************************************
00:08:55.133
00:08:55.133 real 0m17.028s
00:08:55.133 user 0m12.380s
00:08:55.133 sys 0m5.802s
00:08:55.133 13:57:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:55.133 13:57:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:08:55.133 13:57:04 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:08:55.133 13:57:04 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:55.133 13:57:04 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.133 13:57:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:08:55.133 ************************************
00:08:55.133 START TEST spdk_dd_posix
00:08:55.133 ************************************
00:08:55.133 13:57:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:08:55.391 * Looking for test storage...
00:08:55.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.391 13:57:04 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:55.392 * First test run, liburing in use 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:55.392 ************************************ 00:08:55.392 START TEST dd_flag_append 00:08:55.392 ************************************ 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=fytr9mg19cvtpu2tvv6doup2wmtz6cpx 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=om1h2muavbfi0wt10470pnfdghyqrdvn 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s fytr9mg19cvtpu2tvv6doup2wmtz6cpx 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s om1h2muavbfi0wt10470pnfdghyqrdvn 00:08:55.392 13:57:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:55.392 [2024-07-25 13:57:04.561552] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:55.392 [2024-07-25 13:57:04.561615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62314 ] 00:08:55.650 [2024-07-25 13:57:04.700046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.650 [2024-07-25 13:57:04.804633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.650 [2024-07-25 13:57:04.847818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.909  Copying: 32/32 [B] (average 31 kBps) 00:08:55.909 00:08:55.909 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ om1h2muavbfi0wt10470pnfdghyqrdvnfytr9mg19cvtpu2tvv6doup2wmtz6cpx == \o\m\1\h\2\m\u\a\v\b\f\i\0\w\t\1\0\4\7\0\p\n\f\d\g\h\y\q\r\d\v\n\f\y\t\r\9\m\g\1\9\c\v\t\p\u\2\t\v\v\6\d\o\u\p\2\w\m\t\z\6\c\p\x ]] 00:08:55.909 00:08:55.909 real 0m0.570s 00:08:55.910 user 0m0.332s 00:08:55.910 sys 0m0.237s 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:55.910 ************************************ 00:08:55.910 END TEST dd_flag_append 00:08:55.910 ************************************ 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:55.910 ************************************ 00:08:55.910 START TEST dd_flag_directory 00:08:55.910 ************************************ 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:55.910 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.910 [2024-07-25 13:57:05.193941] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:55.910 [2024-07-25 13:57:05.194085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62348 ] 00:08:56.169 [2024-07-25 13:57:05.333283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.169 [2024-07-25 13:57:05.435413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.428 [2024-07-25 13:57:05.477924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.428 [2024-07-25 13:57:05.505541] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:56.428 [2024-07-25 13:57:05.505678] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:56.428 [2024-07-25 13:57:05.505725] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.428 [2024-07-25 13:57:05.598397] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:56.428 13:57:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:56.686 [2024-07-25 13:57:05.742514] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:56.686 [2024-07-25 13:57:05.742646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62359 ] 00:08:56.686 [2024-07-25 13:57:05.880569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.686 [2024-07-25 13:57:05.984725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.945 [2024-07-25 13:57:06.027127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.945 [2024-07-25 13:57:06.054629] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:56.945 [2024-07-25 13:57:06.054767] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:56.945 [2024-07-25 13:57:06.054799] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.945 [2024-07-25 13:57:06.146007] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.945 00:08:56.945 real 0m1.106s 00:08:56.945 user 0m0.638s 00:08:56.945 sys 0m0.255s 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.945 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:56.945 ************************************ 00:08:56.945 END TEST dd_flag_directory 00:08:56.945 ************************************ 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:57.204 ************************************ 00:08:57.204 START TEST dd_flag_nofollow 00:08:57.204 ************************************ 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.204 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:57.205 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.205 [2024-07-25 13:57:06.362163] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:57.205 [2024-07-25 13:57:06.362229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62387 ] 00:08:57.205 [2024-07-25 13:57:06.501517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.463 [2024-07-25 13:57:06.606776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.463 [2024-07-25 13:57:06.648967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.464 [2024-07-25 13:57:06.676485] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:57.464 [2024-07-25 13:57:06.676538] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:57.464 [2024-07-25 13:57:06.676547] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.464 [2024-07-25 13:57:06.767278] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.723 13:57:06 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:57.723 13:57:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:57.723 [2024-07-25 13:57:06.904776] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:08:57.723 [2024-07-25 13:57:06.904853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62402 ] 00:08:57.987 [2024-07-25 13:57:07.041854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.987 [2024-07-25 13:57:07.140945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.987 [2024-07-25 13:57:07.182132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.987 [2024-07-25 13:57:07.210495] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:57.987 [2024-07-25 13:57:07.210544] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:57.987 [2024-07-25 13:57:07.210554] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.256 [2024-07-25 13:57:07.301955] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:58.256 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.256 [2024-07-25 13:57:07.448407] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:08:58.256 [2024-07-25 13:57:07.448597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62404 ] 00:08:58.513 [2024-07-25 13:57:07.590807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.513 [2024-07-25 13:57:07.697411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.513 [2024-07-25 13:57:07.739665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.771  Copying: 512/512 [B] (average 500 kBps) 00:08:58.771 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 9u1lq1bxtnqzzt34kdo9d10vpk2lob8ry6ahkf420w5i81qz35mdgdrnc7bh61jumm0wwbiquiy6crt2fq98y9engyx5teqk48gm0ehfpqn9jz366xj07hav3m3vnnd8z29eflc969iq5l4t5xhlxs51wnnrotev3f5cieyxm077x7gub9qdlaiaos2grvy87euntk704gh20p8n28z6zxicfuio6ctbe5zkhii3rztfcrdu97v725ne33s09q7crxrhzz6d0zk4kbv8d95pa74v9e7mfzgju0bzwpotfitq0khxt4qooyp2x1ngtcqcoyxxh4fr3r8dbrmehmb44skuvjowxqpvx1ud1rgebtasi959l68h3ikos69xp1q9qum2aj1ji97umb241emxdoq9etxhwgb0pw2ymts4o9119f5e35e1qfozyip1yk9fq6l9stfjumgaxcuuiwc7qbh5bcrp8u348yvjzo9uegnpuo9rnzoc62wyfo60hixi == \9\u\1\l\q\1\b\x\t\n\q\z\z\t\3\4\k\d\o\9\d\1\0\v\p\k\2\l\o\b\8\r\y\6\a\h\k\f\4\2\0\w\5\i\8\1\q\z\3\5\m\d\g\d\r\n\c\7\b\h\6\1\j\u\m\m\0\w\w\b\i\q\u\i\y\6\c\r\t\2\f\q\9\8\y\9\e\n\g\y\x\5\t\e\q\k\4\8\g\m\0\e\h\f\p\q\n\9\j\z\3\6\6\x\j\0\7\h\a\v\3\m\3\v\n\n\d\8\z\2\9\e\f\l\c\9\6\9\i\q\5\l\4\t\5\x\h\l\x\s\5\1\w\n\n\r\o\t\e\v\3\f\5\c\i\e\y\x\m\0\7\7\x\7\g\u\b\9\q\d\l\a\i\a\o\s\2\g\r\v\y\8\7\e\u\n\t\k\7\0\4\g\h\2\0\p\8\n\2\8\z\6\z\x\i\c\f\u\i\o\6\c\t\b\e\5\z\k\h\i\i\3\r\z\t\f\c\r\d\u\9\7\v\7\2\5\n\e\3\3\s\0\9\q\7\c\r\x\r\h\z\z\6\d\0\z\k\4\k\b\v\8\d\9\5\p\a\7\4\v\9\e\7\m\f\z\g\j\u\0\b\z\w\p\o\t\f\i\t\q\0\k\h\x\t\4\q\o\o\y\p\2\x\1\n\g\t\c\q\c\o\y\x\x\h\4\f\r\3\r\8\d\b\r\m\e\h\m\b\4\4\s\k\u\v\j\o\w\x\q\p\v\x\1\u\d\1\r\g\e\b\t\a\s\i\9\5\9\l\6\8\h\3\i\k\o\s\6\9\x\p\1\q\9\q\u\m\2\a\j\1\j\i\9\7\u\m\b\2\4\1\e\m\x\d\o\q\9\e\t\x\h\w\g\b\0\p\w\2\y\m\t\s\4\o\9\1\1\9\f\5\e\3\5\e\1\q\f\o\z\y\i\p\1\y\k\9\f\q\6\l\9\s\t\f\j\u\m\g\a\x\c\u\u\i\w\c\7\q\b\h\5\b\c\r\p\8\u\3\4\8\y\v\j\z\o\9\u\e\g\n\p\u\o\9\r\n\z\o\c\6\2\w\y\f\o\6\0\h\i\x\i ]] 00:08:58.771 ************************************ 00:08:58.771 END TEST dd_flag_nofollow 00:08:58.771 ************************************ 00:08:58.771 00:08:58.771 real 0m1.650s 00:08:58.771 user 0m0.974s 00:08:58.771 sys 0m0.468s 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.771 13:57:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:58.771 ************************************ 00:08:58.771 START TEST dd_flag_noatime 00:08:58.771 ************************************ 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:58.771 13:57:08 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721915827 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721915827 00:08:58.771 13:57:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:09:00.146 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.146 [2024-07-25 13:57:09.082857] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:00.146 [2024-07-25 13:57:09.083000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62452 ] 00:09:00.146 [2024-07-25 13:57:09.209214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.146 [2024-07-25 13:57:09.312992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.146 [2024-07-25 13:57:09.354880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.404  Copying: 512/512 [B] (average 500 kBps) 00:09:00.404 00:09:00.404 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.404 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721915827 )) 00:09:00.404 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.404 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721915827 )) 00:09:00.404 13:57:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.404 [2024-07-25 13:57:09.633851] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:00.404 [2024-07-25 13:57:09.633937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:09:00.661 [2024-07-25 13:57:09.772948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.661 [2024-07-25 13:57:09.877840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.661 [2024-07-25 13:57:09.918510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.919  Copying: 512/512 [B] (average 500 kBps) 00:09:00.919 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721915829 )) 00:09:00.919 00:09:00.919 real 0m2.133s 00:09:00.919 user 0m0.654s 00:09:00.919 sys 0m0.476s 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 ************************************ 00:09:00.919 END TEST dd_flag_noatime 00:09:00.919 ************************************ 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 ************************************ 00:09:00.919 START TEST dd_flags_misc 00:09:00.919 ************************************ 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:00.919 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:01.177 [2024-07-25 13:57:10.269013] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:01.177 [2024-07-25 13:57:10.269081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62496 ] 00:09:01.177 [2024-07-25 13:57:10.404066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.436 [2024-07-25 13:57:10.495303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.436 [2024-07-25 13:57:10.534963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.436  Copying: 512/512 [B] (average 500 kBps) 00:09:01.436 00:09:01.694 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q4bt7h2c1k8wrw5sbzycs3361vr6stxfw9wimvkys0xby7o7tp0ts5h9xwd1vgm91e9dralb9fivjmlfbfyi30n3xniwnorqv8ojt7mdw5t01fcajaw1rr91z6k2engy663fta0po8zvrojqbmd3gxu1tn91o77uizsu46ld5s8w46lv4otwykmrj73xeuuh4x541bwxn1aaglly4mj6f3rmje04srwg0ygrgtnbbexfxt60jo7lhjpbdt0tje6r5js2e8viiely1b2ck3qf8w702l1lfzoxlrzfff3yboe3hewkru708fobltg5i5ak0a90fsrq6d4sm90ccbau4mdbaix2qhv2hgzwiv7g20k7obinzxubf71rltpq8dw4hh76xz99r1anxdkylaifht3czfxo2n9t15h3x00d01mznkud9etb4qrywi94cmsfuqzeup0fnoaqkku399y12whnq08tz3fw2tl47eif8d2omlh9g1bald1rdt44cjmh == \q\4\b\t\7\h\2\c\1\k\8\w\r\w\5\s\b\z\y\c\s\3\3\6\1\v\r\6\s\t\x\f\w\9\w\i\m\v\k\y\s\0\x\b\y\7\o\7\t\p\0\t\s\5\h\9\x\w\d\1\v\g\m\9\1\e\9\d\r\a\l\b\9\f\i\v\j\m\l\f\b\f\y\i\3\0\n\3\x\n\i\w\n\o\r\q\v\8\o\j\t\7\m\d\w\5\t\0\1\f\c\a\j\a\w\1\r\r\9\1\z\6\k\2\e\n\g\y\6\6\3\f\t\a\0\p\o\8\z\v\r\o\j\q\b\m\d\3\g\x\u\1\t\n\9\1\o\7\7\u\i\z\s\u\4\6\l\d\5\s\8\w\4\6\l\v\4\o\t\w\y\k\m\r\j\7\3\x\e\u\u\h\4\x\5\4\1\b\w\x\n\1\a\a\g\l\l\y\4\m\j\6\f\3\r\m\j\e\0\4\s\r\w\g\0\y\g\r\g\t\n\b\b\e\x\f\x\t\6\0\j\o\7\l\h\j\p\b\d\t\0\t\j\e\6\r\5\j\s\2\e\8\v\i\i\e\l\y\1\b\2\c\k\3\q\f\8\w\7\0\2\l\1\l\f\z\o\x\l\r\z\f\f\f\3\y\b\o\e\3\h\e\w\k\r\u\7\0\8\f\o\b\l\t\g\5\i\5\a\k\0\a\9\0\f\s\r\q\6\d\4\s\m\9\0\c\c\b\a\u\4\m\d\b\a\i\x\2\q\h\v\2\h\g\z\w\i\v\7\g\2\0\k\7\o\b\i\n\z\x\u\b\f\7\1\r\l\t\p\q\8\d\w\4\h\h\7\6\x\z\9\9\r\1\a\n\x\d\k\y\l\a\i\f\h\t\3\c\z\f\x\o\2\n\9\t\1\5\h\3\x\0\0\d\0\1\m\z\n\k\u\d\9\e\t\b\4\q\r\y\w\i\9\4\c\m\s\f\u\q\z\e\u\p\0\f\n\o\a\q\k\k\u\3\9\9\y\1\2\w\h\n\q\0\8\t\z\3\f\w\2\t\l\4\7\e\i\f\8\d\2\o\m\l\h\9\g\1\b\a\l\d\1\r\d\t\4\4\c\j\m\h ]] 00:09:01.694 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.694 13:57:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:01.694 [2024-07-25 13:57:10.777280] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:01.694 [2024-07-25 13:57:10.777350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62506 ] 00:09:01.694 [2024-07-25 13:57:10.901075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.694 [2024-07-25 13:57:10.988946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.952 [2024-07-25 13:57:11.031184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.952  Copying: 512/512 [B] (average 500 kBps) 00:09:01.952 00:09:01.952 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q4bt7h2c1k8wrw5sbzycs3361vr6stxfw9wimvkys0xby7o7tp0ts5h9xwd1vgm91e9dralb9fivjmlfbfyi30n3xniwnorqv8ojt7mdw5t01fcajaw1rr91z6k2engy663fta0po8zvrojqbmd3gxu1tn91o77uizsu46ld5s8w46lv4otwykmrj73xeuuh4x541bwxn1aaglly4mj6f3rmje04srwg0ygrgtnbbexfxt60jo7lhjpbdt0tje6r5js2e8viiely1b2ck3qf8w702l1lfzoxlrzfff3yboe3hewkru708fobltg5i5ak0a90fsrq6d4sm90ccbau4mdbaix2qhv2hgzwiv7g20k7obinzxubf71rltpq8dw4hh76xz99r1anxdkylaifht3czfxo2n9t15h3x00d01mznkud9etb4qrywi94cmsfuqzeup0fnoaqkku399y12whnq08tz3fw2tl47eif8d2omlh9g1bald1rdt44cjmh == \q\4\b\t\7\h\2\c\1\k\8\w\r\w\5\s\b\z\y\c\s\3\3\6\1\v\r\6\s\t\x\f\w\9\w\i\m\v\k\y\s\0\x\b\y\7\o\7\t\p\0\t\s\5\h\9\x\w\d\1\v\g\m\9\1\e\9\d\r\a\l\b\9\f\i\v\j\m\l\f\b\f\y\i\3\0\n\3\x\n\i\w\n\o\r\q\v\8\o\j\t\7\m\d\w\5\t\0\1\f\c\a\j\a\w\1\r\r\9\1\z\6\k\2\e\n\g\y\6\6\3\f\t\a\0\p\o\8\z\v\r\o\j\q\b\m\d\3\g\x\u\1\t\n\9\1\o\7\7\u\i\z\s\u\4\6\l\d\5\s\8\w\4\6\l\v\4\o\t\w\y\k\m\r\j\7\3\x\e\u\u\h\4\x\5\4\1\b\w\x\n\1\a\a\g\l\l\y\4\m\j\6\f\3\r\m\j\e\0\4\s\r\w\g\0\y\g\r\g\t\n\b\b\e\x\f\x\t\6\0\j\o\7\l\h\j\p\b\d\t\0\t\j\e\6\r\5\j\s\2\e\8\v\i\i\e\l\y\1\b\2\c\k\3\q\f\8\w\7\0\2\l\1\l\f\z\o\x\l\r\z\f\f\f\3\y\b\o\e\3\h\e\w\k\r\u\7\0\8\f\o\b\l\t\g\5\i\5\a\k\0\a\9\0\f\s\r\q\6\d\4\s\m\9\0\c\c\b\a\u\4\m\d\b\a\i\x\2\q\h\v\2\h\g\z\w\i\v\7\g\2\0\k\7\o\b\i\n\z\x\u\b\f\7\1\r\l\t\p\q\8\d\w\4\h\h\7\6\x\z\9\9\r\1\a\n\x\d\k\y\l\a\i\f\h\t\3\c\z\f\x\o\2\n\9\t\1\5\h\3\x\0\0\d\0\1\m\z\n\k\u\d\9\e\t\b\4\q\r\y\w\i\9\4\c\m\s\f\u\q\z\e\u\p\0\f\n\o\a\q\k\k\u\3\9\9\y\1\2\w\h\n\q\0\8\t\z\3\f\w\2\t\l\4\7\e\i\f\8\d\2\o\m\l\h\9\g\1\b\a\l\d\1\r\d\t\4\4\c\j\m\h ]] 00:09:01.952 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.952 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:02.210 [2024-07-25 13:57:11.297574] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:02.210 [2024-07-25 13:57:11.297731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:09:02.210 [2024-07-25 13:57:11.435613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.484 [2024-07-25 13:57:11.542565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.484 [2024-07-25 13:57:11.585636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.769  Copying: 512/512 [B] (average 100 kBps) 00:09:02.769 00:09:02.769 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q4bt7h2c1k8wrw5sbzycs3361vr6stxfw9wimvkys0xby7o7tp0ts5h9xwd1vgm91e9dralb9fivjmlfbfyi30n3xniwnorqv8ojt7mdw5t01fcajaw1rr91z6k2engy663fta0po8zvrojqbmd3gxu1tn91o77uizsu46ld5s8w46lv4otwykmrj73xeuuh4x541bwxn1aaglly4mj6f3rmje04srwg0ygrgtnbbexfxt60jo7lhjpbdt0tje6r5js2e8viiely1b2ck3qf8w702l1lfzoxlrzfff3yboe3hewkru708fobltg5i5ak0a90fsrq6d4sm90ccbau4mdbaix2qhv2hgzwiv7g20k7obinzxubf71rltpq8dw4hh76xz99r1anxdkylaifht3czfxo2n9t15h3x00d01mznkud9etb4qrywi94cmsfuqzeup0fnoaqkku399y12whnq08tz3fw2tl47eif8d2omlh9g1bald1rdt44cjmh == \q\4\b\t\7\h\2\c\1\k\8\w\r\w\5\s\b\z\y\c\s\3\3\6\1\v\r\6\s\t\x\f\w\9\w\i\m\v\k\y\s\0\x\b\y\7\o\7\t\p\0\t\s\5\h\9\x\w\d\1\v\g\m\9\1\e\9\d\r\a\l\b\9\f\i\v\j\m\l\f\b\f\y\i\3\0\n\3\x\n\i\w\n\o\r\q\v\8\o\j\t\7\m\d\w\5\t\0\1\f\c\a\j\a\w\1\r\r\9\1\z\6\k\2\e\n\g\y\6\6\3\f\t\a\0\p\o\8\z\v\r\o\j\q\b\m\d\3\g\x\u\1\t\n\9\1\o\7\7\u\i\z\s\u\4\6\l\d\5\s\8\w\4\6\l\v\4\o\t\w\y\k\m\r\j\7\3\x\e\u\u\h\4\x\5\4\1\b\w\x\n\1\a\a\g\l\l\y\4\m\j\6\f\3\r\m\j\e\0\4\s\r\w\g\0\y\g\r\g\t\n\b\b\e\x\f\x\t\6\0\j\o\7\l\h\j\p\b\d\t\0\t\j\e\6\r\5\j\s\2\e\8\v\i\i\e\l\y\1\b\2\c\k\3\q\f\8\w\7\0\2\l\1\l\f\z\o\x\l\r\z\f\f\f\3\y\b\o\e\3\h\e\w\k\r\u\7\0\8\f\o\b\l\t\g\5\i\5\a\k\0\a\9\0\f\s\r\q\6\d\4\s\m\9\0\c\c\b\a\u\4\m\d\b\a\i\x\2\q\h\v\2\h\g\z\w\i\v\7\g\2\0\k\7\o\b\i\n\z\x\u\b\f\7\1\r\l\t\p\q\8\d\w\4\h\h\7\6\x\z\9\9\r\1\a\n\x\d\k\y\l\a\i\f\h\t\3\c\z\f\x\o\2\n\9\t\1\5\h\3\x\0\0\d\0\1\m\z\n\k\u\d\9\e\t\b\4\q\r\y\w\i\9\4\c\m\s\f\u\q\z\e\u\p\0\f\n\o\a\q\k\k\u\3\9\9\y\1\2\w\h\n\q\0\8\t\z\3\f\w\2\t\l\4\7\e\i\f\8\d\2\o\m\l\h\9\g\1\b\a\l\d\1\r\d\t\4\4\c\j\m\h ]] 00:09:02.769 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.769 13:57:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:02.769 [2024-07-25 13:57:11.860423] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:02.769 [2024-07-25 13:57:11.860496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62525 ] 00:09:02.769 [2024-07-25 13:57:11.999734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.028 [2024-07-25 13:57:12.105786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.028 [2024-07-25 13:57:12.149134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.287  Copying: 512/512 [B] (average 250 kBps) 00:09:03.287 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q4bt7h2c1k8wrw5sbzycs3361vr6stxfw9wimvkys0xby7o7tp0ts5h9xwd1vgm91e9dralb9fivjmlfbfyi30n3xniwnorqv8ojt7mdw5t01fcajaw1rr91z6k2engy663fta0po8zvrojqbmd3gxu1tn91o77uizsu46ld5s8w46lv4otwykmrj73xeuuh4x541bwxn1aaglly4mj6f3rmje04srwg0ygrgtnbbexfxt60jo7lhjpbdt0tje6r5js2e8viiely1b2ck3qf8w702l1lfzoxlrzfff3yboe3hewkru708fobltg5i5ak0a90fsrq6d4sm90ccbau4mdbaix2qhv2hgzwiv7g20k7obinzxubf71rltpq8dw4hh76xz99r1anxdkylaifht3czfxo2n9t15h3x00d01mznkud9etb4qrywi94cmsfuqzeup0fnoaqkku399y12whnq08tz3fw2tl47eif8d2omlh9g1bald1rdt44cjmh == \q\4\b\t\7\h\2\c\1\k\8\w\r\w\5\s\b\z\y\c\s\3\3\6\1\v\r\6\s\t\x\f\w\9\w\i\m\v\k\y\s\0\x\b\y\7\o\7\t\p\0\t\s\5\h\9\x\w\d\1\v\g\m\9\1\e\9\d\r\a\l\b\9\f\i\v\j\m\l\f\b\f\y\i\3\0\n\3\x\n\i\w\n\o\r\q\v\8\o\j\t\7\m\d\w\5\t\0\1\f\c\a\j\a\w\1\r\r\9\1\z\6\k\2\e\n\g\y\6\6\3\f\t\a\0\p\o\8\z\v\r\o\j\q\b\m\d\3\g\x\u\1\t\n\9\1\o\7\7\u\i\z\s\u\4\6\l\d\5\s\8\w\4\6\l\v\4\o\t\w\y\k\m\r\j\7\3\x\e\u\u\h\4\x\5\4\1\b\w\x\n\1\a\a\g\l\l\y\4\m\j\6\f\3\r\m\j\e\0\4\s\r\w\g\0\y\g\r\g\t\n\b\b\e\x\f\x\t\6\0\j\o\7\l\h\j\p\b\d\t\0\t\j\e\6\r\5\j\s\2\e\8\v\i\i\e\l\y\1\b\2\c\k\3\q\f\8\w\7\0\2\l\1\l\f\z\o\x\l\r\z\f\f\f\3\y\b\o\e\3\h\e\w\k\r\u\7\0\8\f\o\b\l\t\g\5\i\5\a\k\0\a\9\0\f\s\r\q\6\d\4\s\m\9\0\c\c\b\a\u\4\m\d\b\a\i\x\2\q\h\v\2\h\g\z\w\i\v\7\g\2\0\k\7\o\b\i\n\z\x\u\b\f\7\1\r\l\t\p\q\8\d\w\4\h\h\7\6\x\z\9\9\r\1\a\n\x\d\k\y\l\a\i\f\h\t\3\c\z\f\x\o\2\n\9\t\1\5\h\3\x\0\0\d\0\1\m\z\n\k\u\d\9\e\t\b\4\q\r\y\w\i\9\4\c\m\s\f\u\q\z\e\u\p\0\f\n\o\a\q\k\k\u\3\9\9\y\1\2\w\h\n\q\0\8\t\z\3\f\w\2\t\l\4\7\e\i\f\8\d\2\o\m\l\h\9\g\1\b\a\l\d\1\r\d\t\4\4\c\j\m\h ]] 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:03.287 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:03.287 [2024-07-25 13:57:12.428610] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:03.287 [2024-07-25 13:57:12.428681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62534 ] 00:09:03.287 [2024-07-25 13:57:12.555678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.545 [2024-07-25 13:57:12.676580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.545 [2024-07-25 13:57:12.718341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.804  Copying: 512/512 [B] (average 500 kBps) 00:09:03.804 00:09:03.804 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a6d9fqch1e7cy14awc487320l61y1lynre5150i990ha03act49s9231pkrx7bi4u0t8ovu3fanf43684109l91l7975feec2g5ny77k5bxpkz671x6jpiwbaspwq9xl6z2eq2qd4bnwyyzk2lgc9afmwy83t1mo5f0hgaqmm2d1r0xa7enqmesrsse392p5sydaj9fxnqzt8i6fgukijxpoqmxmqep79031iy7jjtjnrd4ge70gw7e5nq3v3ohlgtz9y8eovwihfts0lwmgayrxso9o03usfljw3suubcnk1rhtueu8b1r9c9pa4d6m9bpi318vksgwu8jnbrdw31a6hfo50jova997own2gkm7c3iyn31n2hlg7i1nbsugjddizstrh681tgtk8s21nxok4bb4fcdrt6fx2rv3m1nlh480vspab68fl1m8twvlphd2ugbrsg347t4fybxxfnd688pzj6jm2nueup0aptlgs40jxw2fds7iyrjesk18 == \a\6\d\9\f\q\c\h\1\e\7\c\y\1\4\a\w\c\4\8\7\3\2\0\l\6\1\y\1\l\y\n\r\e\5\1\5\0\i\9\9\0\h\a\0\3\a\c\t\4\9\s\9\2\3\1\p\k\r\x\7\b\i\4\u\0\t\8\o\v\u\3\f\a\n\f\4\3\6\8\4\1\0\9\l\9\1\l\7\9\7\5\f\e\e\c\2\g\5\n\y\7\7\k\5\b\x\p\k\z\6\7\1\x\6\j\p\i\w\b\a\s\p\w\q\9\x\l\6\z\2\e\q\2\q\d\4\b\n\w\y\y\z\k\2\l\g\c\9\a\f\m\w\y\8\3\t\1\m\o\5\f\0\h\g\a\q\m\m\2\d\1\r\0\x\a\7\e\n\q\m\e\s\r\s\s\e\3\9\2\p\5\s\y\d\a\j\9\f\x\n\q\z\t\8\i\6\f\g\u\k\i\j\x\p\o\q\m\x\m\q\e\p\7\9\0\3\1\i\y\7\j\j\t\j\n\r\d\4\g\e\7\0\g\w\7\e\5\n\q\3\v\3\o\h\l\g\t\z\9\y\8\e\o\v\w\i\h\f\t\s\0\l\w\m\g\a\y\r\x\s\o\9\o\0\3\u\s\f\l\j\w\3\s\u\u\b\c\n\k\1\r\h\t\u\e\u\8\b\1\r\9\c\9\p\a\4\d\6\m\9\b\p\i\3\1\8\v\k\s\g\w\u\8\j\n\b\r\d\w\3\1\a\6\h\f\o\5\0\j\o\v\a\9\9\7\o\w\n\2\g\k\m\7\c\3\i\y\n\3\1\n\2\h\l\g\7\i\1\n\b\s\u\g\j\d\d\i\z\s\t\r\h\6\8\1\t\g\t\k\8\s\2\1\n\x\o\k\4\b\b\4\f\c\d\r\t\6\f\x\2\r\v\3\m\1\n\l\h\4\8\0\v\s\p\a\b\6\8\f\l\1\m\8\t\w\v\l\p\h\d\2\u\g\b\r\s\g\3\4\7\t\4\f\y\b\x\x\f\n\d\6\8\8\p\z\j\6\j\m\2\n\u\e\u\p\0\a\p\t\l\g\s\4\0\j\x\w\2\f\d\s\7\i\y\r\j\e\s\k\1\8 ]] 00:09:03.804 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:03.804 13:57:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:03.804 [2024-07-25 13:57:12.986684] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:03.804 [2024-07-25 13:57:12.986758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62544 ] 00:09:04.061 [2024-07-25 13:57:13.114462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.061 [2024-07-25 13:57:13.215741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.061 [2024-07-25 13:57:13.257363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.318  Copying: 512/512 [B] (average 500 kBps) 00:09:04.318 00:09:04.318 13:57:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a6d9fqch1e7cy14awc487320l61y1lynre5150i990ha03act49s9231pkrx7bi4u0t8ovu3fanf43684109l91l7975feec2g5ny77k5bxpkz671x6jpiwbaspwq9xl6z2eq2qd4bnwyyzk2lgc9afmwy83t1mo5f0hgaqmm2d1r0xa7enqmesrsse392p5sydaj9fxnqzt8i6fgukijxpoqmxmqep79031iy7jjtjnrd4ge70gw7e5nq3v3ohlgtz9y8eovwihfts0lwmgayrxso9o03usfljw3suubcnk1rhtueu8b1r9c9pa4d6m9bpi318vksgwu8jnbrdw31a6hfo50jova997own2gkm7c3iyn31n2hlg7i1nbsugjddizstrh681tgtk8s21nxok4bb4fcdrt6fx2rv3m1nlh480vspab68fl1m8twvlphd2ugbrsg347t4fybxxfnd688pzj6jm2nueup0aptlgs40jxw2fds7iyrjesk18 == \a\6\d\9\f\q\c\h\1\e\7\c\y\1\4\a\w\c\4\8\7\3\2\0\l\6\1\y\1\l\y\n\r\e\5\1\5\0\i\9\9\0\h\a\0\3\a\c\t\4\9\s\9\2\3\1\p\k\r\x\7\b\i\4\u\0\t\8\o\v\u\3\f\a\n\f\4\3\6\8\4\1\0\9\l\9\1\l\7\9\7\5\f\e\e\c\2\g\5\n\y\7\7\k\5\b\x\p\k\z\6\7\1\x\6\j\p\i\w\b\a\s\p\w\q\9\x\l\6\z\2\e\q\2\q\d\4\b\n\w\y\y\z\k\2\l\g\c\9\a\f\m\w\y\8\3\t\1\m\o\5\f\0\h\g\a\q\m\m\2\d\1\r\0\x\a\7\e\n\q\m\e\s\r\s\s\e\3\9\2\p\5\s\y\d\a\j\9\f\x\n\q\z\t\8\i\6\f\g\u\k\i\j\x\p\o\q\m\x\m\q\e\p\7\9\0\3\1\i\y\7\j\j\t\j\n\r\d\4\g\e\7\0\g\w\7\e\5\n\q\3\v\3\o\h\l\g\t\z\9\y\8\e\o\v\w\i\h\f\t\s\0\l\w\m\g\a\y\r\x\s\o\9\o\0\3\u\s\f\l\j\w\3\s\u\u\b\c\n\k\1\r\h\t\u\e\u\8\b\1\r\9\c\9\p\a\4\d\6\m\9\b\p\i\3\1\8\v\k\s\g\w\u\8\j\n\b\r\d\w\3\1\a\6\h\f\o\5\0\j\o\v\a\9\9\7\o\w\n\2\g\k\m\7\c\3\i\y\n\3\1\n\2\h\l\g\7\i\1\n\b\s\u\g\j\d\d\i\z\s\t\r\h\6\8\1\t\g\t\k\8\s\2\1\n\x\o\k\4\b\b\4\f\c\d\r\t\6\f\x\2\r\v\3\m\1\n\l\h\4\8\0\v\s\p\a\b\6\8\f\l\1\m\8\t\w\v\l\p\h\d\2\u\g\b\r\s\g\3\4\7\t\4\f\y\b\x\x\f\n\d\6\8\8\p\z\j\6\j\m\2\n\u\e\u\p\0\a\p\t\l\g\s\4\0\j\x\w\2\f\d\s\7\i\y\r\j\e\s\k\1\8 ]] 00:09:04.318 13:57:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:04.318 13:57:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:04.318 [2024-07-25 13:57:13.509952] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:04.318 [2024-07-25 13:57:13.510019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62553 ] 00:09:04.576 [2024-07-25 13:57:13.648213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.576 [2024-07-25 13:57:13.746633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.576 [2024-07-25 13:57:13.789817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.833  Copying: 512/512 [B] (average 100 kBps) 00:09:04.833 00:09:04.834 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a6d9fqch1e7cy14awc487320l61y1lynre5150i990ha03act49s9231pkrx7bi4u0t8ovu3fanf43684109l91l7975feec2g5ny77k5bxpkz671x6jpiwbaspwq9xl6z2eq2qd4bnwyyzk2lgc9afmwy83t1mo5f0hgaqmm2d1r0xa7enqmesrsse392p5sydaj9fxnqzt8i6fgukijxpoqmxmqep79031iy7jjtjnrd4ge70gw7e5nq3v3ohlgtz9y8eovwihfts0lwmgayrxso9o03usfljw3suubcnk1rhtueu8b1r9c9pa4d6m9bpi318vksgwu8jnbrdw31a6hfo50jova997own2gkm7c3iyn31n2hlg7i1nbsugjddizstrh681tgtk8s21nxok4bb4fcdrt6fx2rv3m1nlh480vspab68fl1m8twvlphd2ugbrsg347t4fybxxfnd688pzj6jm2nueup0aptlgs40jxw2fds7iyrjesk18 == \a\6\d\9\f\q\c\h\1\e\7\c\y\1\4\a\w\c\4\8\7\3\2\0\l\6\1\y\1\l\y\n\r\e\5\1\5\0\i\9\9\0\h\a\0\3\a\c\t\4\9\s\9\2\3\1\p\k\r\x\7\b\i\4\u\0\t\8\o\v\u\3\f\a\n\f\4\3\6\8\4\1\0\9\l\9\1\l\7\9\7\5\f\e\e\c\2\g\5\n\y\7\7\k\5\b\x\p\k\z\6\7\1\x\6\j\p\i\w\b\a\s\p\w\q\9\x\l\6\z\2\e\q\2\q\d\4\b\n\w\y\y\z\k\2\l\g\c\9\a\f\m\w\y\8\3\t\1\m\o\5\f\0\h\g\a\q\m\m\2\d\1\r\0\x\a\7\e\n\q\m\e\s\r\s\s\e\3\9\2\p\5\s\y\d\a\j\9\f\x\n\q\z\t\8\i\6\f\g\u\k\i\j\x\p\o\q\m\x\m\q\e\p\7\9\0\3\1\i\y\7\j\j\t\j\n\r\d\4\g\e\7\0\g\w\7\e\5\n\q\3\v\3\o\h\l\g\t\z\9\y\8\e\o\v\w\i\h\f\t\s\0\l\w\m\g\a\y\r\x\s\o\9\o\0\3\u\s\f\l\j\w\3\s\u\u\b\c\n\k\1\r\h\t\u\e\u\8\b\1\r\9\c\9\p\a\4\d\6\m\9\b\p\i\3\1\8\v\k\s\g\w\u\8\j\n\b\r\d\w\3\1\a\6\h\f\o\5\0\j\o\v\a\9\9\7\o\w\n\2\g\k\m\7\c\3\i\y\n\3\1\n\2\h\l\g\7\i\1\n\b\s\u\g\j\d\d\i\z\s\t\r\h\6\8\1\t\g\t\k\8\s\2\1\n\x\o\k\4\b\b\4\f\c\d\r\t\6\f\x\2\r\v\3\m\1\n\l\h\4\8\0\v\s\p\a\b\6\8\f\l\1\m\8\t\w\v\l\p\h\d\2\u\g\b\r\s\g\3\4\7\t\4\f\y\b\x\x\f\n\d\6\8\8\p\z\j\6\j\m\2\n\u\e\u\p\0\a\p\t\l\g\s\4\0\j\x\w\2\f\d\s\7\i\y\r\j\e\s\k\1\8 ]] 00:09:04.834 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:04.834 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:04.834 [2024-07-25 13:57:14.054783] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:04.834 [2024-07-25 13:57:14.054857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:09:05.091 [2024-07-25 13:57:14.178741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.091 [2024-07-25 13:57:14.292406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.091 [2024-07-25 13:57:14.334702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.350  Copying: 512/512 [B] (average 125 kBps) 00:09:05.350 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a6d9fqch1e7cy14awc487320l61y1lynre5150i990ha03act49s9231pkrx7bi4u0t8ovu3fanf43684109l91l7975feec2g5ny77k5bxpkz671x6jpiwbaspwq9xl6z2eq2qd4bnwyyzk2lgc9afmwy83t1mo5f0hgaqmm2d1r0xa7enqmesrsse392p5sydaj9fxnqzt8i6fgukijxpoqmxmqep79031iy7jjtjnrd4ge70gw7e5nq3v3ohlgtz9y8eovwihfts0lwmgayrxso9o03usfljw3suubcnk1rhtueu8b1r9c9pa4d6m9bpi318vksgwu8jnbrdw31a6hfo50jova997own2gkm7c3iyn31n2hlg7i1nbsugjddizstrh681tgtk8s21nxok4bb4fcdrt6fx2rv3m1nlh480vspab68fl1m8twvlphd2ugbrsg347t4fybxxfnd688pzj6jm2nueup0aptlgs40jxw2fds7iyrjesk18 == \a\6\d\9\f\q\c\h\1\e\7\c\y\1\4\a\w\c\4\8\7\3\2\0\l\6\1\y\1\l\y\n\r\e\5\1\5\0\i\9\9\0\h\a\0\3\a\c\t\4\9\s\9\2\3\1\p\k\r\x\7\b\i\4\u\0\t\8\o\v\u\3\f\a\n\f\4\3\6\8\4\1\0\9\l\9\1\l\7\9\7\5\f\e\e\c\2\g\5\n\y\7\7\k\5\b\x\p\k\z\6\7\1\x\6\j\p\i\w\b\a\s\p\w\q\9\x\l\6\z\2\e\q\2\q\d\4\b\n\w\y\y\z\k\2\l\g\c\9\a\f\m\w\y\8\3\t\1\m\o\5\f\0\h\g\a\q\m\m\2\d\1\r\0\x\a\7\e\n\q\m\e\s\r\s\s\e\3\9\2\p\5\s\y\d\a\j\9\f\x\n\q\z\t\8\i\6\f\g\u\k\i\j\x\p\o\q\m\x\m\q\e\p\7\9\0\3\1\i\y\7\j\j\t\j\n\r\d\4\g\e\7\0\g\w\7\e\5\n\q\3\v\3\o\h\l\g\t\z\9\y\8\e\o\v\w\i\h\f\t\s\0\l\w\m\g\a\y\r\x\s\o\9\o\0\3\u\s\f\l\j\w\3\s\u\u\b\c\n\k\1\r\h\t\u\e\u\8\b\1\r\9\c\9\p\a\4\d\6\m\9\b\p\i\3\1\8\v\k\s\g\w\u\8\j\n\b\r\d\w\3\1\a\6\h\f\o\5\0\j\o\v\a\9\9\7\o\w\n\2\g\k\m\7\c\3\i\y\n\3\1\n\2\h\l\g\7\i\1\n\b\s\u\g\j\d\d\i\z\s\t\r\h\6\8\1\t\g\t\k\8\s\2\1\n\x\o\k\4\b\b\4\f\c\d\r\t\6\f\x\2\r\v\3\m\1\n\l\h\4\8\0\v\s\p\a\b\6\8\f\l\1\m\8\t\w\v\l\p\h\d\2\u\g\b\r\s\g\3\4\7\t\4\f\y\b\x\x\f\n\d\6\8\8\p\z\j\6\j\m\2\n\u\e\u\p\0\a\p\t\l\g\s\4\0\j\x\w\2\f\d\s\7\i\y\r\j\e\s\k\1\8 ]] 00:09:05.350 00:09:05.350 real 0m4.351s 00:09:05.350 user 0m2.540s 00:09:05.350 sys 0m1.822s 00:09:05.350 ************************************ 00:09:05.350 END TEST dd_flags_misc 00:09:05.350 ************************************ 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:05.350 * Second test run, disabling liburing, forcing AIO 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:09:05.350 ************************************ 00:09:05.350 START TEST dd_flag_append_forced_aio 00:09:05.350 ************************************ 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=opyscxquf9o1k6mpsg2juklpsn8n6dgs 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=f70a82obgjswdyv0p5t560nwxsz4qs2p 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s opyscxquf9o1k6mpsg2juklpsn8n6dgs 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s f70a82obgjswdyv0p5t560nwxsz4qs2p 00:09:05.350 13:57:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:05.607 [2024-07-25 13:57:14.663579] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:05.608 [2024-07-25 13:57:14.663644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62591 ] 00:09:05.608 [2024-07-25 13:57:14.800161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.608 [2024-07-25 13:57:14.907339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.865 [2024-07-25 13:57:14.950082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.123  Copying: 32/32 [B] (average 31 kBps) 00:09:06.123 00:09:06.123 ************************************ 00:09:06.123 END TEST dd_flag_append_forced_aio 00:09:06.123 ************************************ 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ f70a82obgjswdyv0p5t560nwxsz4qs2popyscxquf9o1k6mpsg2juklpsn8n6dgs == \f\7\0\a\8\2\o\b\g\j\s\w\d\y\v\0\p\5\t\5\6\0\n\w\x\s\z\4\q\s\2\p\o\p\y\s\c\x\q\u\f\9\o\1\k\6\m\p\s\g\2\j\u\k\l\p\s\n\8\n\6\d\g\s ]] 00:09:06.124 00:09:06.124 real 0m0.595s 00:09:06.124 user 0m0.329s 00:09:06.124 sys 0m0.134s 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:06.124 ************************************ 00:09:06.124 START TEST dd_flag_directory_forced_aio 00:09:06.124 ************************************ 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.124 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:06.124 [2024-07-25 13:57:15.310938] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:06.124 [2024-07-25 13:57:15.311012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62618 ] 00:09:06.382 [2024-07-25 13:57:15.448893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.382 [2024-07-25 13:57:15.552780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.382 [2024-07-25 13:57:15.594611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.382 [2024-07-25 13:57:15.622212] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:06.382 [2024-07-25 13:57:15.622254] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:06.382 [2024-07-25 13:57:15.622264] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.640 [2024-07-25 13:57:15.716540] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.640 13:57:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:06.640 [2024-07-25 13:57:15.873721] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:06.640 [2024-07-25 13:57:15.873868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62628 ] 00:09:06.897 [2024-07-25 13:57:16.012967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.897 [2024-07-25 13:57:16.118414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.897 [2024-07-25 13:57:16.161057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.897 [2024-07-25 13:57:16.189971] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:06.897 [2024-07-25 13:57:16.190021] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:06.897 [2024-07-25 13:57:16.190029] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.155 [2024-07-25 13:57:16.282704] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:07.155 00:09:07.155 real 0m1.123s 00:09:07.155 user 0m0.656s 00:09:07.155 sys 0m0.255s 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:07.155 ************************************ 00:09:07.155 END TEST dd_flag_directory_forced_aio 00:09:07.155 ************************************ 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:07.155 ************************************ 00:09:07.155 START TEST dd_flag_nofollow_forced_aio 00:09:07.155 ************************************ 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.155 13:57:16 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.155 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:07.156 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.413 [2024-07-25 13:57:16.489975] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:07.413 [2024-07-25 13:57:16.490037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62661 ] 00:09:07.413 [2024-07-25 13:57:16.613444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.413 [2024-07-25 13:57:16.713484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.710 [2024-07-25 13:57:16.755443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:07.710 [2024-07-25 13:57:16.783099] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:07.710 [2024-07-25 13:57:16.783143] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:07.710 [2024-07-25 13:57:16.783153] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.710 [2024-07-25 13:57:16.875929] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.710 13:57:16 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:07.710 13:57:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:07.967 [2024-07-25 13:57:17.021700] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:07.967 [2024-07-25 13:57:17.021778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62671 ] 00:09:07.967 [2024-07-25 13:57:17.160389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.967 [2024-07-25 13:57:17.266231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.225 [2024-07-25 13:57:17.308324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.225 [2024-07-25 13:57:17.337049] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:08.225 [2024-07-25 13:57:17.337111] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:08.225 [2024-07-25 13:57:17.337132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.225 [2024-07-25 13:57:17.430635] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:09:08.225 13:57:17 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:08.225 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:08.482 13:57:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.482 [2024-07-25 13:57:17.584443] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:08.482 [2024-07-25 13:57:17.584606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:09:08.483 [2024-07-25 13:57:17.710810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.740 [2024-07-25 13:57:17.817057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.740 [2024-07-25 13:57:17.861260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.997  Copying: 512/512 [B] (average 500 kBps) 00:09:08.997 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ s22wda75r3r4ktlaelyl98hgcb7d943uvvhtr59t0uxcxr3nbnmvbkzhl2uxfr5m6oy1ej1vkp5kv6cpr2ae7eedpboe5lo7vl1eh4k3wx69coknzh6bvktjg1eqri93weu63smoqzdxny5j8vpestx7wlhnjwi29lo4twr27zidaiq6ipx06twrpqvucvf2y8ber6zvpydthu7odih57tfbk1xp3fhexsbpccfed3m2s1rnf5iynvkjp87rgr3fg5os4djz8zfwjwsp1owfkjeqe3tcqrespqyul413dac5m7j0ujuvrj1yaajl9824sfkm7vv5efy0ydt09dy7fvvycvd4hrlvzzvlr45u60a0tpk6u1qmg4e3iw7xr0g8t558sjy273wmnbzmmoe4p29jpth29buaztk6exyt4cx4x72fa5dh73vnox3saa0ifjxrmpafheyrpl3niz1wkzbedtaoybclnfbeeirntqi4cz5gmvp3onk68l91vvp2 == \s\2\2\w\d\a\7\5\r\3\r\4\k\t\l\a\e\l\y\l\9\8\h\g\c\b\7\d\9\4\3\u\v\v\h\t\r\5\9\t\0\u\x\c\x\r\3\n\b\n\m\v\b\k\z\h\l\2\u\x\f\r\5\m\6\o\y\1\e\j\1\v\k\p\5\k\v\6\c\p\r\2\a\e\7\e\e\d\p\b\o\e\5\l\o\7\v\l\1\e\h\4\k\3\w\x\6\9\c\o\k\n\z\h\6\b\v\k\t\j\g\1\e\q\r\i\9\3\w\e\u\6\3\s\m\o\q\z\d\x\n\y\5\j\8\v\p\e\s\t\x\7\w\l\h\n\j\w\i\2\9\l\o\4\t\w\r\2\7\z\i\d\a\i\q\6\i\p\x\0\6\t\w\r\p\q\v\u\c\v\f\2\y\8\b\e\r\6\z\v\p\y\d\t\h\u\7\o\d\i\h\5\7\t\f\b\k\1\x\p\3\f\h\e\x\s\b\p\c\c\f\e\d\3\m\2\s\1\r\n\f\5\i\y\n\v\k\j\p\8\7\r\g\r\3\f\g\5\o\s\4\d\j\z\8\z\f\w\j\w\s\p\1\o\w\f\k\j\e\q\e\3\t\c\q\r\e\s\p\q\y\u\l\4\1\3\d\a\c\5\m\7\j\0\u\j\u\v\r\j\1\y\a\a\j\l\9\8\2\4\s\f\k\m\7\v\v\5\e\f\y\0\y\d\t\0\9\d\y\7\f\v\v\y\c\v\d\4\h\r\l\v\z\z\v\l\r\4\5\u\6\0\a\0\t\p\k\6\u\1\q\m\g\4\e\3\i\w\7\x\r\0\g\8\t\5\5\8\s\j\y\2\7\3\w\m\n\b\z\m\m\o\e\4\p\2\9\j\p\t\h\2\9\b\u\a\z\t\k\6\e\x\y\t\4\c\x\4\x\7\2\f\a\5\d\h\7\3\v\n\o\x\3\s\a\a\0\i\f\j\x\r\m\p\a\f\h\e\y\r\p\l\3\n\i\z\1\w\k\z\b\e\d\t\a\o\y\b\c\l\n\f\b\e\e\i\r\n\t\q\i\4\c\z\5\g\m\v\p\3\o\n\k\6\8\l\9\1\v\v\p\2 ]] 00:09:08.997 00:09:08.997 real 0m1.676s 00:09:08.997 user 0m0.983s 00:09:08.997 sys 0m0.365s 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:08.997 ************************************ 00:09:08.997 END TEST dd_flag_nofollow_forced_aio 00:09:08.997 ************************************ 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:09:08.997 
13:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:08.997 ************************************ 00:09:08.997 START TEST dd_flag_noatime_forced_aio 00:09:08.997 ************************************ 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:08.997 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.998 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721915837 00:09:08.998 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.998 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721915838 00:09:08.998 13:57:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:09.929 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.929 [2024-07-25 13:57:19.213345] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:09.929 [2024-07-25 13:57:19.213537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62719 ] 00:09:10.224 [2024-07-25 13:57:19.351374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.224 [2024-07-25 13:57:19.453935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.224 [2024-07-25 13:57:19.494965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:10.483  Copying: 512/512 [B] (average 500 kBps) 00:09:10.483 00:09:10.483 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.483 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721915837 )) 00:09:10.483 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:10.483 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721915838 )) 00:09:10.483 13:57:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:10.741 [2024-07-25 13:57:19.793449] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:10.741 [2024-07-25 13:57:19.793621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62730 ] 00:09:10.741 [2024-07-25 13:57:19.928887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.741 [2024-07-25 13:57:20.036650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.000 [2024-07-25 13:57:20.079062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.000  Copying: 512/512 [B] (average 500 kBps) 00:09:11.000 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721915840 )) 00:09:11.260 00:09:11.260 real 0m2.182s 00:09:11.260 user 0m0.670s 00:09:11.260 sys 0m0.272s 00:09:11.260 ************************************ 00:09:11.260 END TEST dd_flag_noatime_forced_aio 00:09:11.260 ************************************ 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:11.260 
************************************ 00:09:11.260 START TEST dd_flags_misc_forced_aio 00:09:11.260 ************************************ 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.260 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:11.260 [2024-07-25 13:57:20.458577] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:11.260 [2024-07-25 13:57:20.458753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62757 ] 00:09:11.519 [2024-07-25 13:57:20.597908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.519 [2024-07-25 13:57:20.705643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.519 [2024-07-25 13:57:20.747966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.779  Copying: 512/512 [B] (average 500 kBps) 00:09:11.779 00:09:11.779 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e01zm6hukjohx1eti6bzzt17sd4hkdzu69etsh78t8y1rbn5zpgj9n2xsd3jpl49wmc89dzi2vc8ty18jc06gc28cis981586rb5p4kc6lphzr4z2zl0b2udj6ubmw7i762m1qlzeftqtu6rmx8cs1cyawzca77d1wmbiux1ocdpic8vjqph4vmdi7jr515osqgz485mwvjpljkiufkop4akfs9iw5dl32bgp52uvbae3n2uv1t9o6olbt2kpme61bur2dhb2lu7dzyw2des6c7x0b9vpdsuyjfqlxsimfaqidm3ej22kk7w5mn4c3qu01ih0411qq9mtrxrkq5f8hicgeo8kncscacu9ov2t3qkokv6hle23owqx6t2tkor8vcy7z64bckmccb7fo7bnnlcybeu49tdpx1s8oatwgzekevomhbqhkbcn0f2aklgq0g3kq2g63jvdv81rnx24mkrrvs26d0erumzo4k06w3wcd9tjotpekb6gphp2b60 == 
\e\0\1\z\m\6\h\u\k\j\o\h\x\1\e\t\i\6\b\z\z\t\1\7\s\d\4\h\k\d\z\u\6\9\e\t\s\h\7\8\t\8\y\1\r\b\n\5\z\p\g\j\9\n\2\x\s\d\3\j\p\l\4\9\w\m\c\8\9\d\z\i\2\v\c\8\t\y\1\8\j\c\0\6\g\c\2\8\c\i\s\9\8\1\5\8\6\r\b\5\p\4\k\c\6\l\p\h\z\r\4\z\2\z\l\0\b\2\u\d\j\6\u\b\m\w\7\i\7\6\2\m\1\q\l\z\e\f\t\q\t\u\6\r\m\x\8\c\s\1\c\y\a\w\z\c\a\7\7\d\1\w\m\b\i\u\x\1\o\c\d\p\i\c\8\v\j\q\p\h\4\v\m\d\i\7\j\r\5\1\5\o\s\q\g\z\4\8\5\m\w\v\j\p\l\j\k\i\u\f\k\o\p\4\a\k\f\s\9\i\w\5\d\l\3\2\b\g\p\5\2\u\v\b\a\e\3\n\2\u\v\1\t\9\o\6\o\l\b\t\2\k\p\m\e\6\1\b\u\r\2\d\h\b\2\l\u\7\d\z\y\w\2\d\e\s\6\c\7\x\0\b\9\v\p\d\s\u\y\j\f\q\l\x\s\i\m\f\a\q\i\d\m\3\e\j\2\2\k\k\7\w\5\m\n\4\c\3\q\u\0\1\i\h\0\4\1\1\q\q\9\m\t\r\x\r\k\q\5\f\8\h\i\c\g\e\o\8\k\n\c\s\c\a\c\u\9\o\v\2\t\3\q\k\o\k\v\6\h\l\e\2\3\o\w\q\x\6\t\2\t\k\o\r\8\v\c\y\7\z\6\4\b\c\k\m\c\c\b\7\f\o\7\b\n\n\l\c\y\b\e\u\4\9\t\d\p\x\1\s\8\o\a\t\w\g\z\e\k\e\v\o\m\h\b\q\h\k\b\c\n\0\f\2\a\k\l\g\q\0\g\3\k\q\2\g\6\3\j\v\d\v\8\1\r\n\x\2\4\m\k\r\r\v\s\2\6\d\0\e\r\u\m\z\o\4\k\0\6\w\3\w\c\d\9\t\j\o\t\p\e\k\b\6\g\p\h\p\2\b\6\0 ]] 00:09:11.779 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.779 13:57:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:11.779 [2024-07-25 13:57:21.033031] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:11.779 [2024-07-25 13:57:21.033115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:09:12.037 [2024-07-25 13:57:21.172020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.037 [2024-07-25 13:57:21.277661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.037 [2024-07-25 13:57:21.318795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.296  Copying: 512/512 [B] (average 500 kBps) 00:09:12.296 00:09:12.296 13:57:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e01zm6hukjohx1eti6bzzt17sd4hkdzu69etsh78t8y1rbn5zpgj9n2xsd3jpl49wmc89dzi2vc8ty18jc06gc28cis981586rb5p4kc6lphzr4z2zl0b2udj6ubmw7i762m1qlzeftqtu6rmx8cs1cyawzca77d1wmbiux1ocdpic8vjqph4vmdi7jr515osqgz485mwvjpljkiufkop4akfs9iw5dl32bgp52uvbae3n2uv1t9o6olbt2kpme61bur2dhb2lu7dzyw2des6c7x0b9vpdsuyjfqlxsimfaqidm3ej22kk7w5mn4c3qu01ih0411qq9mtrxrkq5f8hicgeo8kncscacu9ov2t3qkokv6hle23owqx6t2tkor8vcy7z64bckmccb7fo7bnnlcybeu49tdpx1s8oatwgzekevomhbqhkbcn0f2aklgq0g3kq2g63jvdv81rnx24mkrrvs26d0erumzo4k06w3wcd9tjotpekb6gphp2b60 == 
\e\0\1\z\m\6\h\u\k\j\o\h\x\1\e\t\i\6\b\z\z\t\1\7\s\d\4\h\k\d\z\u\6\9\e\t\s\h\7\8\t\8\y\1\r\b\n\5\z\p\g\j\9\n\2\x\s\d\3\j\p\l\4\9\w\m\c\8\9\d\z\i\2\v\c\8\t\y\1\8\j\c\0\6\g\c\2\8\c\i\s\9\8\1\5\8\6\r\b\5\p\4\k\c\6\l\p\h\z\r\4\z\2\z\l\0\b\2\u\d\j\6\u\b\m\w\7\i\7\6\2\m\1\q\l\z\e\f\t\q\t\u\6\r\m\x\8\c\s\1\c\y\a\w\z\c\a\7\7\d\1\w\m\b\i\u\x\1\o\c\d\p\i\c\8\v\j\q\p\h\4\v\m\d\i\7\j\r\5\1\5\o\s\q\g\z\4\8\5\m\w\v\j\p\l\j\k\i\u\f\k\o\p\4\a\k\f\s\9\i\w\5\d\l\3\2\b\g\p\5\2\u\v\b\a\e\3\n\2\u\v\1\t\9\o\6\o\l\b\t\2\k\p\m\e\6\1\b\u\r\2\d\h\b\2\l\u\7\d\z\y\w\2\d\e\s\6\c\7\x\0\b\9\v\p\d\s\u\y\j\f\q\l\x\s\i\m\f\a\q\i\d\m\3\e\j\2\2\k\k\7\w\5\m\n\4\c\3\q\u\0\1\i\h\0\4\1\1\q\q\9\m\t\r\x\r\k\q\5\f\8\h\i\c\g\e\o\8\k\n\c\s\c\a\c\u\9\o\v\2\t\3\q\k\o\k\v\6\h\l\e\2\3\o\w\q\x\6\t\2\t\k\o\r\8\v\c\y\7\z\6\4\b\c\k\m\c\c\b\7\f\o\7\b\n\n\l\c\y\b\e\u\4\9\t\d\p\x\1\s\8\o\a\t\w\g\z\e\k\e\v\o\m\h\b\q\h\k\b\c\n\0\f\2\a\k\l\g\q\0\g\3\k\q\2\g\6\3\j\v\d\v\8\1\r\n\x\2\4\m\k\r\r\v\s\2\6\d\0\e\r\u\m\z\o\4\k\0\6\w\3\w\c\d\9\t\j\o\t\p\e\k\b\6\g\p\h\p\2\b\6\0 ]] 00:09:12.296 13:57:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:12.296 13:57:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:12.557 [2024-07-25 13:57:21.601147] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:12.557 [2024-07-25 13:57:21.601221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62772 ] 00:09:12.557 [2024-07-25 13:57:21.740585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.557 [2024-07-25 13:57:21.847734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.815 [2024-07-25 13:57:21.889880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.815  Copying: 512/512 [B] (average 100 kBps) 00:09:12.815 00:09:13.074 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e01zm6hukjohx1eti6bzzt17sd4hkdzu69etsh78t8y1rbn5zpgj9n2xsd3jpl49wmc89dzi2vc8ty18jc06gc28cis981586rb5p4kc6lphzr4z2zl0b2udj6ubmw7i762m1qlzeftqtu6rmx8cs1cyawzca77d1wmbiux1ocdpic8vjqph4vmdi7jr515osqgz485mwvjpljkiufkop4akfs9iw5dl32bgp52uvbae3n2uv1t9o6olbt2kpme61bur2dhb2lu7dzyw2des6c7x0b9vpdsuyjfqlxsimfaqidm3ej22kk7w5mn4c3qu01ih0411qq9mtrxrkq5f8hicgeo8kncscacu9ov2t3qkokv6hle23owqx6t2tkor8vcy7z64bckmccb7fo7bnnlcybeu49tdpx1s8oatwgzekevomhbqhkbcn0f2aklgq0g3kq2g63jvdv81rnx24mkrrvs26d0erumzo4k06w3wcd9tjotpekb6gphp2b60 == 
\e\0\1\z\m\6\h\u\k\j\o\h\x\1\e\t\i\6\b\z\z\t\1\7\s\d\4\h\k\d\z\u\6\9\e\t\s\h\7\8\t\8\y\1\r\b\n\5\z\p\g\j\9\n\2\x\s\d\3\j\p\l\4\9\w\m\c\8\9\d\z\i\2\v\c\8\t\y\1\8\j\c\0\6\g\c\2\8\c\i\s\9\8\1\5\8\6\r\b\5\p\4\k\c\6\l\p\h\z\r\4\z\2\z\l\0\b\2\u\d\j\6\u\b\m\w\7\i\7\6\2\m\1\q\l\z\e\f\t\q\t\u\6\r\m\x\8\c\s\1\c\y\a\w\z\c\a\7\7\d\1\w\m\b\i\u\x\1\o\c\d\p\i\c\8\v\j\q\p\h\4\v\m\d\i\7\j\r\5\1\5\o\s\q\g\z\4\8\5\m\w\v\j\p\l\j\k\i\u\f\k\o\p\4\a\k\f\s\9\i\w\5\d\l\3\2\b\g\p\5\2\u\v\b\a\e\3\n\2\u\v\1\t\9\o\6\o\l\b\t\2\k\p\m\e\6\1\b\u\r\2\d\h\b\2\l\u\7\d\z\y\w\2\d\e\s\6\c\7\x\0\b\9\v\p\d\s\u\y\j\f\q\l\x\s\i\m\f\a\q\i\d\m\3\e\j\2\2\k\k\7\w\5\m\n\4\c\3\q\u\0\1\i\h\0\4\1\1\q\q\9\m\t\r\x\r\k\q\5\f\8\h\i\c\g\e\o\8\k\n\c\s\c\a\c\u\9\o\v\2\t\3\q\k\o\k\v\6\h\l\e\2\3\o\w\q\x\6\t\2\t\k\o\r\8\v\c\y\7\z\6\4\b\c\k\m\c\c\b\7\f\o\7\b\n\n\l\c\y\b\e\u\4\9\t\d\p\x\1\s\8\o\a\t\w\g\z\e\k\e\v\o\m\h\b\q\h\k\b\c\n\0\f\2\a\k\l\g\q\0\g\3\k\q\2\g\6\3\j\v\d\v\8\1\r\n\x\2\4\m\k\r\r\v\s\2\6\d\0\e\r\u\m\z\o\4\k\0\6\w\3\w\c\d\9\t\j\o\t\p\e\k\b\6\g\p\h\p\2\b\6\0 ]] 00:09:13.074 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:13.074 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:13.074 [2024-07-25 13:57:22.175587] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:13.074 [2024-07-25 13:57:22.175659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62785 ] 00:09:13.074 [2024-07-25 13:57:22.312816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.333 [2024-07-25 13:57:22.418784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.333 [2024-07-25 13:57:22.460769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.592  Copying: 512/512 [B] (average 250 kBps) 00:09:13.592 00:09:13.592 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e01zm6hukjohx1eti6bzzt17sd4hkdzu69etsh78t8y1rbn5zpgj9n2xsd3jpl49wmc89dzi2vc8ty18jc06gc28cis981586rb5p4kc6lphzr4z2zl0b2udj6ubmw7i762m1qlzeftqtu6rmx8cs1cyawzca77d1wmbiux1ocdpic8vjqph4vmdi7jr515osqgz485mwvjpljkiufkop4akfs9iw5dl32bgp52uvbae3n2uv1t9o6olbt2kpme61bur2dhb2lu7dzyw2des6c7x0b9vpdsuyjfqlxsimfaqidm3ej22kk7w5mn4c3qu01ih0411qq9mtrxrkq5f8hicgeo8kncscacu9ov2t3qkokv6hle23owqx6t2tkor8vcy7z64bckmccb7fo7bnnlcybeu49tdpx1s8oatwgzekevomhbqhkbcn0f2aklgq0g3kq2g63jvdv81rnx24mkrrvs26d0erumzo4k06w3wcd9tjotpekb6gphp2b60 == 
\e\0\1\z\m\6\h\u\k\j\o\h\x\1\e\t\i\6\b\z\z\t\1\7\s\d\4\h\k\d\z\u\6\9\e\t\s\h\7\8\t\8\y\1\r\b\n\5\z\p\g\j\9\n\2\x\s\d\3\j\p\l\4\9\w\m\c\8\9\d\z\i\2\v\c\8\t\y\1\8\j\c\0\6\g\c\2\8\c\i\s\9\8\1\5\8\6\r\b\5\p\4\k\c\6\l\p\h\z\r\4\z\2\z\l\0\b\2\u\d\j\6\u\b\m\w\7\i\7\6\2\m\1\q\l\z\e\f\t\q\t\u\6\r\m\x\8\c\s\1\c\y\a\w\z\c\a\7\7\d\1\w\m\b\i\u\x\1\o\c\d\p\i\c\8\v\j\q\p\h\4\v\m\d\i\7\j\r\5\1\5\o\s\q\g\z\4\8\5\m\w\v\j\p\l\j\k\i\u\f\k\o\p\4\a\k\f\s\9\i\w\5\d\l\3\2\b\g\p\5\2\u\v\b\a\e\3\n\2\u\v\1\t\9\o\6\o\l\b\t\2\k\p\m\e\6\1\b\u\r\2\d\h\b\2\l\u\7\d\z\y\w\2\d\e\s\6\c\7\x\0\b\9\v\p\d\s\u\y\j\f\q\l\x\s\i\m\f\a\q\i\d\m\3\e\j\2\2\k\k\7\w\5\m\n\4\c\3\q\u\0\1\i\h\0\4\1\1\q\q\9\m\t\r\x\r\k\q\5\f\8\h\i\c\g\e\o\8\k\n\c\s\c\a\c\u\9\o\v\2\t\3\q\k\o\k\v\6\h\l\e\2\3\o\w\q\x\6\t\2\t\k\o\r\8\v\c\y\7\z\6\4\b\c\k\m\c\c\b\7\f\o\7\b\n\n\l\c\y\b\e\u\4\9\t\d\p\x\1\s\8\o\a\t\w\g\z\e\k\e\v\o\m\h\b\q\h\k\b\c\n\0\f\2\a\k\l\g\q\0\g\3\k\q\2\g\6\3\j\v\d\v\8\1\r\n\x\2\4\m\k\r\r\v\s\2\6\d\0\e\r\u\m\z\o\4\k\0\6\w\3\w\c\d\9\t\j\o\t\p\e\k\b\6\g\p\h\p\2\b\6\0 ]] 00:09:13.592 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:13.592 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:13.592 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:13.593 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:13.593 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:13.593 13:57:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:13.593 [2024-07-25 13:57:22.772883] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:13.593 [2024-07-25 13:57:22.772989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:09:13.851 [2024-07-25 13:57:22.917837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.851 [2024-07-25 13:57:23.024618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.851 [2024-07-25 13:57:23.067524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.109  Copying: 512/512 [B] (average 500 kBps) 00:09:14.109 00:09:14.109 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ njyxavbw4vh2ygiwqstjsrb3lk24kjb1bmrugnha40eco2llk942a60ojbtn4admggu5du7iu2o12m6f3uexg7s092a860w22rvt4tri405js7k182thcj6ni2xom5nrlr14610kn4qwrdgprqlu4av5qpxdf2hdqjqf0eg964ty8kckv5538f33429cdfc06kgvcloy0k7ospjjcq3xru5segsyjkl3fsia44u8s614axtpnkgge9edm0z4i0kea0wcpex002rtng0jsb2yh8b12zv5izxxbbaf8p2upbuq20h8bklbg2yhwyrwtqxvwjhhu9pzwl8mbh87017yue8r33u6vrpdu78225s8fvx5hbj0vwbiber7scje8ndgi4n648549wbn46xtbj439p59pgkcw1j9hbxmxz9k9lqsaq8v7r84q0pfi7yparfnlv55bw2rnl4mh3cs1ytnsnmlt32mp7q94opmbvjgyswpdchx0952rrpw2a421q2c == \n\j\y\x\a\v\b\w\4\v\h\2\y\g\i\w\q\s\t\j\s\r\b\3\l\k\2\4\k\j\b\1\b\m\r\u\g\n\h\a\4\0\e\c\o\2\l\l\k\9\4\2\a\6\0\o\j\b\t\n\4\a\d\m\g\g\u\5\d\u\7\i\u\2\o\1\2\m\6\f\3\u\e\x\g\7\s\0\9\2\a\8\6\0\w\2\2\r\v\t\4\t\r\i\4\0\5\j\s\7\k\1\8\2\t\h\c\j\6\n\i\2\x\o\m\5\n\r\l\r\1\4\6\1\0\k\n\4\q\w\r\d\g\p\r\q\l\u\4\a\v\5\q\p\x\d\f\2\h\d\q\j\q\f\0\e\g\9\6\4\t\y\8\k\c\k\v\5\5\3\8\f\3\3\4\2\9\c\d\f\c\0\6\k\g\v\c\l\o\y\0\k\7\o\s\p\j\j\c\q\3\x\r\u\5\s\e\g\s\y\j\k\l\3\f\s\i\a\4\4\u\8\s\6\1\4\a\x\t\p\n\k\g\g\e\9\e\d\m\0\z\4\i\0\k\e\a\0\w\c\p\e\x\0\0\2\r\t\n\g\0\j\s\b\2\y\h\8\b\1\2\z\v\5\i\z\x\x\b\b\a\f\8\p\2\u\p\b\u\q\2\0\h\8\b\k\l\b\g\2\y\h\w\y\r\w\t\q\x\v\w\j\h\h\u\9\p\z\w\l\8\m\b\h\8\7\0\1\7\y\u\e\8\r\3\3\u\6\v\r\p\d\u\7\8\2\2\5\s\8\f\v\x\5\h\b\j\0\v\w\b\i\b\e\r\7\s\c\j\e\8\n\d\g\i\4\n\6\4\8\5\4\9\w\b\n\4\6\x\t\b\j\4\3\9\p\5\9\p\g\k\c\w\1\j\9\h\b\x\m\x\z\9\k\9\l\q\s\a\q\8\v\7\r\8\4\q\0\p\f\i\7\y\p\a\r\f\n\l\v\5\5\b\w\2\r\n\l\4\m\h\3\c\s\1\y\t\n\s\n\m\l\t\3\2\m\p\7\q\9\4\o\p\m\b\v\j\g\y\s\w\p\d\c\h\x\0\9\5\2\r\r\p\w\2\a\4\2\1\q\2\c ]] 00:09:14.109 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:14.109 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:14.109 [2024-07-25 13:57:23.348898] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:14.109 [2024-07-25 13:57:23.348990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62800 ] 00:09:14.365 [2024-07-25 13:57:23.492506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.365 [2024-07-25 13:57:23.598063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.365 [2024-07-25 13:57:23.639577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.623  Copying: 512/512 [B] (average 500 kBps) 00:09:14.623 00:09:14.623 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ njyxavbw4vh2ygiwqstjsrb3lk24kjb1bmrugnha40eco2llk942a60ojbtn4admggu5du7iu2o12m6f3uexg7s092a860w22rvt4tri405js7k182thcj6ni2xom5nrlr14610kn4qwrdgprqlu4av5qpxdf2hdqjqf0eg964ty8kckv5538f33429cdfc06kgvcloy0k7ospjjcq3xru5segsyjkl3fsia44u8s614axtpnkgge9edm0z4i0kea0wcpex002rtng0jsb2yh8b12zv5izxxbbaf8p2upbuq20h8bklbg2yhwyrwtqxvwjhhu9pzwl8mbh87017yue8r33u6vrpdu78225s8fvx5hbj0vwbiber7scje8ndgi4n648549wbn46xtbj439p59pgkcw1j9hbxmxz9k9lqsaq8v7r84q0pfi7yparfnlv55bw2rnl4mh3cs1ytnsnmlt32mp7q94opmbvjgyswpdchx0952rrpw2a421q2c == \n\j\y\x\a\v\b\w\4\v\h\2\y\g\i\w\q\s\t\j\s\r\b\3\l\k\2\4\k\j\b\1\b\m\r\u\g\n\h\a\4\0\e\c\o\2\l\l\k\9\4\2\a\6\0\o\j\b\t\n\4\a\d\m\g\g\u\5\d\u\7\i\u\2\o\1\2\m\6\f\3\u\e\x\g\7\s\0\9\2\a\8\6\0\w\2\2\r\v\t\4\t\r\i\4\0\5\j\s\7\k\1\8\2\t\h\c\j\6\n\i\2\x\o\m\5\n\r\l\r\1\4\6\1\0\k\n\4\q\w\r\d\g\p\r\q\l\u\4\a\v\5\q\p\x\d\f\2\h\d\q\j\q\f\0\e\g\9\6\4\t\y\8\k\c\k\v\5\5\3\8\f\3\3\4\2\9\c\d\f\c\0\6\k\g\v\c\l\o\y\0\k\7\o\s\p\j\j\c\q\3\x\r\u\5\s\e\g\s\y\j\k\l\3\f\s\i\a\4\4\u\8\s\6\1\4\a\x\t\p\n\k\g\g\e\9\e\d\m\0\z\4\i\0\k\e\a\0\w\c\p\e\x\0\0\2\r\t\n\g\0\j\s\b\2\y\h\8\b\1\2\z\v\5\i\z\x\x\b\b\a\f\8\p\2\u\p\b\u\q\2\0\h\8\b\k\l\b\g\2\y\h\w\y\r\w\t\q\x\v\w\j\h\h\u\9\p\z\w\l\8\m\b\h\8\7\0\1\7\y\u\e\8\r\3\3\u\6\v\r\p\d\u\7\8\2\2\5\s\8\f\v\x\5\h\b\j\0\v\w\b\i\b\e\r\7\s\c\j\e\8\n\d\g\i\4\n\6\4\8\5\4\9\w\b\n\4\6\x\t\b\j\4\3\9\p\5\9\p\g\k\c\w\1\j\9\h\b\x\m\x\z\9\k\9\l\q\s\a\q\8\v\7\r\8\4\q\0\p\f\i\7\y\p\a\r\f\n\l\v\5\5\b\w\2\r\n\l\4\m\h\3\c\s\1\y\t\n\s\n\m\l\t\3\2\m\p\7\q\9\4\o\p\m\b\v\j\g\y\s\w\p\d\c\h\x\0\9\5\2\r\r\p\w\2\a\4\2\1\q\2\c ]] 00:09:14.623 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:14.623 13:57:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:14.623 [2024-07-25 13:57:23.909633] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:14.623 [2024-07-25 13:57:23.909711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62807 ] 00:09:14.881 [2024-07-25 13:57:24.051488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.881 [2024-07-25 13:57:24.157511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.139 [2024-07-25 13:57:24.199018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.666  Copying: 512/512 [B] (average 1673 Bps) 00:09:15.666 00:09:15.666 13:57:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ njyxavbw4vh2ygiwqstjsrb3lk24kjb1bmrugnha40eco2llk942a60ojbtn4admggu5du7iu2o12m6f3uexg7s092a860w22rvt4tri405js7k182thcj6ni2xom5nrlr14610kn4qwrdgprqlu4av5qpxdf2hdqjqf0eg964ty8kckv5538f33429cdfc06kgvcloy0k7ospjjcq3xru5segsyjkl3fsia44u8s614axtpnkgge9edm0z4i0kea0wcpex002rtng0jsb2yh8b12zv5izxxbbaf8p2upbuq20h8bklbg2yhwyrwtqxvwjhhu9pzwl8mbh87017yue8r33u6vrpdu78225s8fvx5hbj0vwbiber7scje8ndgi4n648549wbn46xtbj439p59pgkcw1j9hbxmxz9k9lqsaq8v7r84q0pfi7yparfnlv55bw2rnl4mh3cs1ytnsnmlt32mp7q94opmbvjgyswpdchx0952rrpw2a421q2c == \n\j\y\x\a\v\b\w\4\v\h\2\y\g\i\w\q\s\t\j\s\r\b\3\l\k\2\4\k\j\b\1\b\m\r\u\g\n\h\a\4\0\e\c\o\2\l\l\k\9\4\2\a\6\0\o\j\b\t\n\4\a\d\m\g\g\u\5\d\u\7\i\u\2\o\1\2\m\6\f\3\u\e\x\g\7\s\0\9\2\a\8\6\0\w\2\2\r\v\t\4\t\r\i\4\0\5\j\s\7\k\1\8\2\t\h\c\j\6\n\i\2\x\o\m\5\n\r\l\r\1\4\6\1\0\k\n\4\q\w\r\d\g\p\r\q\l\u\4\a\v\5\q\p\x\d\f\2\h\d\q\j\q\f\0\e\g\9\6\4\t\y\8\k\c\k\v\5\5\3\8\f\3\3\4\2\9\c\d\f\c\0\6\k\g\v\c\l\o\y\0\k\7\o\s\p\j\j\c\q\3\x\r\u\5\s\e\g\s\y\j\k\l\3\f\s\i\a\4\4\u\8\s\6\1\4\a\x\t\p\n\k\g\g\e\9\e\d\m\0\z\4\i\0\k\e\a\0\w\c\p\e\x\0\0\2\r\t\n\g\0\j\s\b\2\y\h\8\b\1\2\z\v\5\i\z\x\x\b\b\a\f\8\p\2\u\p\b\u\q\2\0\h\8\b\k\l\b\g\2\y\h\w\y\r\w\t\q\x\v\w\j\h\h\u\9\p\z\w\l\8\m\b\h\8\7\0\1\7\y\u\e\8\r\3\3\u\6\v\r\p\d\u\7\8\2\2\5\s\8\f\v\x\5\h\b\j\0\v\w\b\i\b\e\r\7\s\c\j\e\8\n\d\g\i\4\n\6\4\8\5\4\9\w\b\n\4\6\x\t\b\j\4\3\9\p\5\9\p\g\k\c\w\1\j\9\h\b\x\m\x\z\9\k\9\l\q\s\a\q\8\v\7\r\8\4\q\0\p\f\i\7\y\p\a\r\f\n\l\v\5\5\b\w\2\r\n\l\4\m\h\3\c\s\1\y\t\n\s\n\m\l\t\3\2\m\p\7\q\9\4\o\p\m\b\v\j\g\y\s\w\p\d\c\h\x\0\9\5\2\r\r\p\w\2\a\4\2\1\q\2\c ]] 00:09:15.666 13:57:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:15.666 13:57:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:15.666 [2024-07-25 13:57:24.799887] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:15.666 [2024-07-25 13:57:24.799964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62815 ] 00:09:15.666 [2024-07-25 13:57:24.940166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.937 [2024-07-25 13:57:25.047524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.937 [2024-07-25 13:57:25.090923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.454  Copying: 512/512 [B] (average 1651 Bps) 00:09:16.454 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ njyxavbw4vh2ygiwqstjsrb3lk24kjb1bmrugnha40eco2llk942a60ojbtn4admggu5du7iu2o12m6f3uexg7s092a860w22rvt4tri405js7k182thcj6ni2xom5nrlr14610kn4qwrdgprqlu4av5qpxdf2hdqjqf0eg964ty8kckv5538f33429cdfc06kgvcloy0k7ospjjcq3xru5segsyjkl3fsia44u8s614axtpnkgge9edm0z4i0kea0wcpex002rtng0jsb2yh8b12zv5izxxbbaf8p2upbuq20h8bklbg2yhwyrwtqxvwjhhu9pzwl8mbh87017yue8r33u6vrpdu78225s8fvx5hbj0vwbiber7scje8ndgi4n648549wbn46xtbj439p59pgkcw1j9hbxmxz9k9lqsaq8v7r84q0pfi7yparfnlv55bw2rnl4mh3cs1ytnsnmlt32mp7q94opmbvjgyswpdchx0952rrpw2a421q2c == \n\j\y\x\a\v\b\w\4\v\h\2\y\g\i\w\q\s\t\j\s\r\b\3\l\k\2\4\k\j\b\1\b\m\r\u\g\n\h\a\4\0\e\c\o\2\l\l\k\9\4\2\a\6\0\o\j\b\t\n\4\a\d\m\g\g\u\5\d\u\7\i\u\2\o\1\2\m\6\f\3\u\e\x\g\7\s\0\9\2\a\8\6\0\w\2\2\r\v\t\4\t\r\i\4\0\5\j\s\7\k\1\8\2\t\h\c\j\6\n\i\2\x\o\m\5\n\r\l\r\1\4\6\1\0\k\n\4\q\w\r\d\g\p\r\q\l\u\4\a\v\5\q\p\x\d\f\2\h\d\q\j\q\f\0\e\g\9\6\4\t\y\8\k\c\k\v\5\5\3\8\f\3\3\4\2\9\c\d\f\c\0\6\k\g\v\c\l\o\y\0\k\7\o\s\p\j\j\c\q\3\x\r\u\5\s\e\g\s\y\j\k\l\3\f\s\i\a\4\4\u\8\s\6\1\4\a\x\t\p\n\k\g\g\e\9\e\d\m\0\z\4\i\0\k\e\a\0\w\c\p\e\x\0\0\2\r\t\n\g\0\j\s\b\2\y\h\8\b\1\2\z\v\5\i\z\x\x\b\b\a\f\8\p\2\u\p\b\u\q\2\0\h\8\b\k\l\b\g\2\y\h\w\y\r\w\t\q\x\v\w\j\h\h\u\9\p\z\w\l\8\m\b\h\8\7\0\1\7\y\u\e\8\r\3\3\u\6\v\r\p\d\u\7\8\2\2\5\s\8\f\v\x\5\h\b\j\0\v\w\b\i\b\e\r\7\s\c\j\e\8\n\d\g\i\4\n\6\4\8\5\4\9\w\b\n\4\6\x\t\b\j\4\3\9\p\5\9\p\g\k\c\w\1\j\9\h\b\x\m\x\z\9\k\9\l\q\s\a\q\8\v\7\r\8\4\q\0\p\f\i\7\y\p\a\r\f\n\l\v\5\5\b\w\2\r\n\l\4\m\h\3\c\s\1\y\t\n\s\n\m\l\t\3\2\m\p\7\q\9\4\o\p\m\b\v\j\g\y\s\w\p\d\c\h\x\0\9\5\2\r\r\p\w\2\a\4\2\1\q\2\c ]] 00:09:16.454 00:09:16.454 real 0m5.250s 00:09:16.454 user 0m2.683s 00:09:16.454 sys 0m0.976s 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:16.454 ************************************ 00:09:16.454 END TEST dd_flags_misc_forced_aio 00:09:16.454 ************************************ 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:16.454 ************************************ 00:09:16.454 END TEST spdk_dd_posix 00:09:16.454 ************************************ 00:09:16.454 00:09:16.454 real 0m21.327s 00:09:16.454 user 0m10.682s 00:09:16.454 sys 0m5.734s 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:09:16.454 13:57:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:16.714 13:57:25 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:16.714 13:57:25 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:16.714 13:57:25 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.714 13:57:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:16.714 ************************************ 00:09:16.714 START TEST spdk_dd_malloc 00:09:16.714 ************************************ 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:16.714 * Looking for test storage... 00:09:16.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:16.714 ************************************ 00:09:16.714 START TEST dd_malloc_copy 00:09:16.714 ************************************ 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:09:16.714 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:16.715 13:57:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:16.715 [2024-07-25 13:57:25.937976] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:16.715 [2024-07-25 13:57:25.938513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:09:16.715 { 00:09:16.715 "subsystems": [ 00:09:16.715 { 00:09:16.715 "subsystem": "bdev", 00:09:16.715 "config": [ 00:09:16.715 { 00:09:16.715 "params": { 00:09:16.715 "block_size": 512, 00:09:16.715 "num_blocks": 1048576, 00:09:16.715 "name": "malloc0" 00:09:16.715 }, 00:09:16.715 "method": "bdev_malloc_create" 00:09:16.715 }, 00:09:16.715 { 00:09:16.715 "params": { 00:09:16.715 "block_size": 512, 00:09:16.715 "num_blocks": 1048576, 00:09:16.715 "name": "malloc1" 00:09:16.715 }, 00:09:16.715 "method": "bdev_malloc_create" 00:09:16.715 }, 00:09:16.715 { 00:09:16.715 "method": "bdev_wait_for_examine" 00:09:16.715 } 00:09:16.715 ] 00:09:16.715 } 00:09:16.715 ] 00:09:16.715 } 00:09:16.973 [2024-07-25 13:57:26.077978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.973 [2024-07-25 13:57:26.182910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.973 [2024-07-25 13:57:26.227118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.420  Copying: 210/512 [MB] (210 MBps) Copying: 414/512 [MB] (204 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:09:20.420 00:09:20.420 13:57:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:20.420 13:57:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:20.420 13:57:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:20.420 13:57:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:20.420 [2024-07-25 13:57:29.495046] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:20.420 [2024-07-25 13:57:29.495124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ] 00:09:20.420 { 00:09:20.420 "subsystems": [ 00:09:20.420 { 00:09:20.420 "subsystem": "bdev", 00:09:20.420 "config": [ 00:09:20.420 { 00:09:20.420 "params": { 00:09:20.420 "block_size": 512, 00:09:20.420 "num_blocks": 1048576, 00:09:20.420 "name": "malloc0" 00:09:20.420 }, 00:09:20.420 "method": "bdev_malloc_create" 00:09:20.420 }, 00:09:20.420 { 00:09:20.420 "params": { 00:09:20.420 "block_size": 512, 00:09:20.420 "num_blocks": 1048576, 00:09:20.420 "name": "malloc1" 00:09:20.420 }, 00:09:20.420 "method": "bdev_malloc_create" 00:09:20.420 }, 00:09:20.420 { 00:09:20.420 "method": "bdev_wait_for_examine" 00:09:20.420 } 00:09:20.420 ] 00:09:20.420 } 00:09:20.420 ] 00:09:20.420 } 00:09:20.420 [2024-07-25 13:57:29.616816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.421 [2024-07-25 13:57:29.723394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.679 [2024-07-25 13:57:29.767355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:23.813  Copying: 217/512 [MB] (217 MBps) Copying: 421/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:09:23.813 00:09:23.813 ************************************ 00:09:23.813 END TEST dd_malloc_copy 00:09:23.813 ************************************ 00:09:23.813 00:09:23.813 real 0m7.092s 00:09:23.813 user 0m6.238s 00:09:23.813 sys 0m0.699s 00:09:23.813 13:57:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.813 13:57:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:23.813 ************************************ 00:09:23.813 END TEST spdk_dd_malloc 00:09:23.813 ************************************ 00:09:23.813 00:09:23.813 real 0m7.250s 00:09:23.813 user 0m6.308s 00:09:23.813 sys 0m0.790s 00:09:23.813 13:57:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.813 13:57:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:23.813 13:57:33 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:23.813 13:57:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:23.813 13:57:33 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.813 13:57:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:23.813 ************************************ 00:09:23.813 START TEST spdk_dd_bdev_to_bdev 00:09:23.813 ************************************ 00:09:23.813 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:24.072 * Looking for test storage... 
00:09:24.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:24.072 
13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.072 ************************************ 00:09:24.072 START TEST dd_inflate_file 00:09:24.072 ************************************ 00:09:24.072 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.072 [2024-07-25 13:57:33.214451] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:24.072 [2024-07-25 13:57:33.214659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:09:24.072 [2024-07-25 13:57:33.356004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.329 [2024-07-25 13:57:33.464618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.330 [2024-07-25 13:57:33.507469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.588  Copying: 64/64 [MB] (average 1142 MBps) 00:09:24.588 00:09:24.588 00:09:24.588 real 0m0.607s 00:09:24.588 user 0m0.384s 00:09:24.588 sys 0m0.296s 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 ************************************ 00:09:24.588 END TEST dd_inflate_file 00:09:24.588 ************************************ 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 ************************************ 00:09:24.588 START TEST dd_copy_to_out_bdev 00:09:24.588 ************************************ 00:09:24.588 13:57:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:24.588 { 00:09:24.588 "subsystems": [ 00:09:24.588 { 00:09:24.588 "subsystem": "bdev", 00:09:24.588 "config": [ 00:09:24.588 { 00:09:24.588 "params": { 00:09:24.588 "trtype": "pcie", 00:09:24.588 "traddr": "0000:00:10.0", 00:09:24.588 "name": "Nvme0" 00:09:24.588 }, 00:09:24.588 "method": "bdev_nvme_attach_controller" 00:09:24.588 }, 00:09:24.588 { 00:09:24.588 "params": { 00:09:24.588 "trtype": "pcie", 00:09:24.588 "traddr": "0000:00:11.0", 00:09:24.588 "name": "Nvme1" 00:09:24.588 }, 00:09:24.588 "method": "bdev_nvme_attach_controller" 00:09:24.588 }, 00:09:24.588 { 00:09:24.588 "method": "bdev_wait_for_examine" 00:09:24.588 } 00:09:24.588 ] 00:09:24.588 } 00:09:24.588 ] 00:09:24.588 } 00:09:24.588 [2024-07-25 13:57:33.889484] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:24.588 [2024-07-25 13:57:33.889641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:09:24.846 [2024-07-25 13:57:34.028151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.846 [2024-07-25 13:57:34.136704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.104 [2024-07-25 13:57:34.180523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.608  Copying: 64/64 [MB] (average 64 MBps) 00:09:26.608 00:09:26.608 00:09:26.608 real 0m1.820s 00:09:26.608 user 0m1.550s 00:09:26.608 sys 0m1.447s 00:09:26.608 ************************************ 00:09:26.608 END TEST dd_copy_to_out_bdev 00:09:26.608 ************************************ 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 ************************************ 00:09:26.608 START TEST dd_offset_magic 00:09:26.608 ************************************ 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:26.608 13:57:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 [2024-07-25 13:57:35.769405] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:26.608 [2024-07-25 13:57:35.769613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:09:26.608 { 00:09:26.608 "subsystems": [ 00:09:26.608 { 00:09:26.608 "subsystem": "bdev", 00:09:26.608 "config": [ 00:09:26.608 { 00:09:26.608 "params": { 00:09:26.608 "trtype": "pcie", 00:09:26.608 "traddr": "0000:00:10.0", 00:09:26.608 "name": "Nvme0" 00:09:26.608 }, 00:09:26.608 "method": "bdev_nvme_attach_controller" 00:09:26.608 }, 00:09:26.608 { 00:09:26.608 "params": { 00:09:26.608 "trtype": "pcie", 00:09:26.608 "traddr": "0000:00:11.0", 00:09:26.608 "name": "Nvme1" 00:09:26.608 }, 00:09:26.608 "method": "bdev_nvme_attach_controller" 00:09:26.608 }, 00:09:26.608 { 00:09:26.608 "method": "bdev_wait_for_examine" 00:09:26.608 } 00:09:26.608 ] 00:09:26.608 } 00:09:26.608 ] 00:09:26.608 } 00:09:26.608 [2024-07-25 13:57:35.909557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.868 [2024-07-25 13:57:36.016834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.868 [2024-07-25 13:57:36.060116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:27.384  Copying: 65/65 [MB] (average 631 MBps) 00:09:27.384 00:09:27.384 13:57:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:27.384 13:57:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:27.384 13:57:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:27.384 13:57:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:27.643 [2024-07-25 13:57:36.716469] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:27.643 [2024-07-25 13:57:36.716544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63145 ] 00:09:27.643 { 00:09:27.643 "subsystems": [ 00:09:27.643 { 00:09:27.643 "subsystem": "bdev", 00:09:27.643 "config": [ 00:09:27.643 { 00:09:27.643 "params": { 00:09:27.643 "trtype": "pcie", 00:09:27.643 "traddr": "0000:00:10.0", 00:09:27.643 "name": "Nvme0" 00:09:27.643 }, 00:09:27.643 "method": "bdev_nvme_attach_controller" 00:09:27.643 }, 00:09:27.643 { 00:09:27.643 "params": { 00:09:27.643 "trtype": "pcie", 00:09:27.643 "traddr": "0000:00:11.0", 00:09:27.643 "name": "Nvme1" 00:09:27.643 }, 00:09:27.643 "method": "bdev_nvme_attach_controller" 00:09:27.643 }, 00:09:27.643 { 00:09:27.643 "method": "bdev_wait_for_examine" 00:09:27.643 } 00:09:27.643 ] 00:09:27.643 } 00:09:27.643 ] 00:09:27.643 } 00:09:27.643 [2024-07-25 13:57:36.840930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.900 [2024-07-25 13:57:36.949065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.900 [2024-07-25 13:57:36.992595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.157  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:28.157 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:28.157 13:57:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:28.157 [2024-07-25 13:57:37.413946] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:28.157 [2024-07-25 13:57:37.414107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:09:28.157 { 00:09:28.157 "subsystems": [ 00:09:28.157 { 00:09:28.157 "subsystem": "bdev", 00:09:28.157 "config": [ 00:09:28.157 { 00:09:28.157 "params": { 00:09:28.157 "trtype": "pcie", 00:09:28.157 "traddr": "0000:00:10.0", 00:09:28.157 "name": "Nvme0" 00:09:28.157 }, 00:09:28.157 "method": "bdev_nvme_attach_controller" 00:09:28.157 }, 00:09:28.157 { 00:09:28.157 "params": { 00:09:28.157 "trtype": "pcie", 00:09:28.157 "traddr": "0000:00:11.0", 00:09:28.157 "name": "Nvme1" 00:09:28.157 }, 00:09:28.157 "method": "bdev_nvme_attach_controller" 00:09:28.157 }, 00:09:28.157 { 00:09:28.157 "method": "bdev_wait_for_examine" 00:09:28.157 } 00:09:28.157 ] 00:09:28.157 } 00:09:28.157 ] 00:09:28.157 } 00:09:28.415 [2024-07-25 13:57:37.550356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.415 [2024-07-25 13:57:37.657372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.415 [2024-07-25 13:57:37.700818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.240  Copying: 65/65 [MB] (average 773 MBps) 00:09:29.240 00:09:29.240 13:57:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:29.240 13:57:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:29.240 13:57:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:29.240 13:57:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:29.240 [2024-07-25 13:57:38.347954] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:29.240 [2024-07-25 13:57:38.348032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63176 ] 00:09:29.240 { 00:09:29.240 "subsystems": [ 00:09:29.240 { 00:09:29.240 "subsystem": "bdev", 00:09:29.240 "config": [ 00:09:29.240 { 00:09:29.240 "params": { 00:09:29.240 "trtype": "pcie", 00:09:29.240 "traddr": "0000:00:10.0", 00:09:29.240 "name": "Nvme0" 00:09:29.240 }, 00:09:29.240 "method": "bdev_nvme_attach_controller" 00:09:29.240 }, 00:09:29.240 { 00:09:29.240 "params": { 00:09:29.240 "trtype": "pcie", 00:09:29.240 "traddr": "0000:00:11.0", 00:09:29.240 "name": "Nvme1" 00:09:29.240 }, 00:09:29.240 "method": "bdev_nvme_attach_controller" 00:09:29.240 }, 00:09:29.240 { 00:09:29.240 "method": "bdev_wait_for_examine" 00:09:29.240 } 00:09:29.240 ] 00:09:29.240 } 00:09:29.240 ] 00:09:29.240 } 00:09:29.240 [2024-07-25 13:57:38.488854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.500 [2024-07-25 13:57:38.594610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.500 [2024-07-25 13:57:38.637974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.759  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:29.759 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:29.759 00:09:29.759 real 0m3.283s 00:09:29.759 user 0m2.462s 00:09:29.759 sys 0m0.872s 00:09:29.759 ************************************ 00:09:29.759 END TEST dd_offset_magic 00:09:29.759 ************************************ 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:29.759 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:30.019 [2024-07-25 13:57:39.097957] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:30.019 [2024-07-25 13:57:39.098107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63213 ] 00:09:30.019 { 00:09:30.019 "subsystems": [ 00:09:30.019 { 00:09:30.019 "subsystem": "bdev", 00:09:30.019 "config": [ 00:09:30.019 { 00:09:30.019 "params": { 00:09:30.019 "trtype": "pcie", 00:09:30.019 "traddr": "0000:00:10.0", 00:09:30.019 "name": "Nvme0" 00:09:30.019 }, 00:09:30.019 "method": "bdev_nvme_attach_controller" 00:09:30.019 }, 00:09:30.019 { 00:09:30.019 "params": { 00:09:30.019 "trtype": "pcie", 00:09:30.019 "traddr": "0000:00:11.0", 00:09:30.019 "name": "Nvme1" 00:09:30.019 }, 00:09:30.019 "method": "bdev_nvme_attach_controller" 00:09:30.019 }, 00:09:30.019 { 00:09:30.019 "method": "bdev_wait_for_examine" 00:09:30.019 } 00:09:30.019 ] 00:09:30.019 } 00:09:30.019 ] 00:09:30.019 } 00:09:30.019 [2024-07-25 13:57:39.221476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.278 [2024-07-25 13:57:39.325138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.278 [2024-07-25 13:57:39.368593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:30.536  Copying: 5120/5120 [kB] (average 833 MBps) 00:09:30.536 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:30.536 13:57:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:30.536 [2024-07-25 13:57:39.796548] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:30.536 [2024-07-25 13:57:39.796625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:09:30.536 { 00:09:30.536 "subsystems": [ 00:09:30.536 { 00:09:30.536 "subsystem": "bdev", 00:09:30.536 "config": [ 00:09:30.536 { 00:09:30.536 "params": { 00:09:30.536 "trtype": "pcie", 00:09:30.536 "traddr": "0000:00:10.0", 00:09:30.536 "name": "Nvme0" 00:09:30.536 }, 00:09:30.536 "method": "bdev_nvme_attach_controller" 00:09:30.536 }, 00:09:30.536 { 00:09:30.536 "params": { 00:09:30.536 "trtype": "pcie", 00:09:30.536 "traddr": "0000:00:11.0", 00:09:30.536 "name": "Nvme1" 00:09:30.536 }, 00:09:30.536 "method": "bdev_nvme_attach_controller" 00:09:30.536 }, 00:09:30.536 { 00:09:30.536 "method": "bdev_wait_for_examine" 00:09:30.536 } 00:09:30.536 ] 00:09:30.536 } 00:09:30.536 ] 00:09:30.536 } 00:09:30.795 [2024-07-25 13:57:39.934453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.795 [2024-07-25 13:57:40.043448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.795 [2024-07-25 13:57:40.091559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.313  Copying: 5120/5120 [kB] (average 625 MBps) 00:09:31.313 00:09:31.313 13:57:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:31.313 ************************************ 00:09:31.313 END TEST spdk_dd_bdev_to_bdev 00:09:31.313 ************************************ 00:09:31.313 00:09:31.313 real 0m7.438s 00:09:31.313 user 0m5.538s 00:09:31.313 sys 0m3.336s 00:09:31.313 13:57:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.313 13:57:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.313 13:57:40 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:31.313 13:57:40 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:31.313 13:57:40 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.313 13:57:40 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.313 13:57:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:31.313 ************************************ 00:09:31.313 START TEST spdk_dd_uring 00:09:31.313 ************************************ 00:09:31.313 13:57:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:31.571 * Looking for test storage... 
00:09:31.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.571 13:57:40 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:31.572 ************************************ 00:09:31.572 START TEST dd_uring_copy 00:09:31.572 ************************************ 00:09:31.572 
13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=yvclvu2zs84rlnm8o780tcyb4zyr578cik7krj63wqdvb4oxzm0ikpb9reay7m4q06yhwi5msxb0v8cid88enbx40xrds6z70tvynggpna882yiwq9csrts4okvpimvj80rr3gvh3t6sjnywmlp9zv7xlw5f3qsmjcvnyl8ah0fb9q03ltl5oja63i7b3cm5gv9hc2i7183b2i4yfeyv6okzv9gz1bvx3m9gz0t5wm5kalur1q928jg3zotj5qgvj8v8s0i1un1odcc04qma4giyj3c1zxl9spsikldlysuey9l2g1rh2xd186x5380xtu7uy5ct0v42ocz9zkb4w1j59imcm347fhi8nqs24xe8h6ih5ir2ctnm8wly91p6tzlegp2r7o18sah9u7k5ivc80cjwkvhr96n4iion65jbf0ja9ijfvrvyvw3uh587cftkdsz3o4irc8my9dbbm51srup8ab2od5k8eaq5pihv85i58h2903po9tu8v7jiuk47ehiqbxlxc3hgkwssmjni1b7f64mhl2t428f6lhyynzf7x8rirq2zaesdt3ol5jwml9p0k2uvsghyt8qhjjg61zg66b2y2spq0vivnpa71lm191q8d4swyml3s227z9ih7th2vdgoqsfbfv7hj31wwt41kzwznbg04sge55bu11jznxioqn91tp6ttjlf9cudmp4h5feo3j8c9o8l6uw1ncpu02dhseaujzq01hajjl2xord2zwst2kdfs6wuork9a5rdv0b0fzsuc64n6u64ecm5tfoha1fkmiaj4ltaie84h86662z1gutku9cbi72t2uz5d6052lvzi9hu6a870cn49uvoqhgs3vny9ix20kjz4swpprqgpkbeu9v1443qvn3ex1z2cvf5troehrl24645yv5ow7ym3s0eavqcunak64d46xni72867gfvcodh8x2mzoeq2lsiszzyll1uf17r9ea8do5dbaskhvijlwnzswakpaw6lz2ep8gi 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo yvclvu2zs84rlnm8o780tcyb4zyr578cik7krj63wqdvb4oxzm0ikpb9reay7m4q06yhwi5msxb0v8cid88enbx40xrds6z70tvynggpna882yiwq9csrts4okvpimvj80rr3gvh3t6sjnywmlp9zv7xlw5f3qsmjcvnyl8ah0fb9q03ltl5oja63i7b3cm5gv9hc2i7183b2i4yfeyv6okzv9gz1bvx3m9gz0t5wm5kalur1q928jg3zotj5qgvj8v8s0i1un1odcc04qma4giyj3c1zxl9spsikldlysuey9l2g1rh2xd186x5380xtu7uy5ct0v42ocz9zkb4w1j59imcm347fhi8nqs24xe8h6ih5ir2ctnm8wly91p6tzlegp2r7o18sah9u7k5ivc80cjwkvhr96n4iion65jbf0ja9ijfvrvyvw3uh587cftkdsz3o4irc8my9dbbm51srup8ab2od5k8eaq5pihv85i58h2903po9tu8v7jiuk47ehiqbxlxc3hgkwssmjni1b7f64mhl2t428f6lhyynzf7x8rirq2zaesdt3ol5jwml9p0k2uvsghyt8qhjjg61zg66b2y2spq0vivnpa71lm191q8d4swyml3s227z9ih7th2vdgoqsfbfv7hj31wwt41kzwznbg04sge55bu11jznxioqn91tp6ttjlf9cudmp4h5feo3j8c9o8l6uw1ncpu02dhseaujzq01hajjl2xord2zwst2kdfs6wuork9a5rdv0b0fzsuc64n6u64ecm5tfoha1fkmiaj4ltaie84h86662z1gutku9cbi72t2uz5d6052lvzi9hu6a870cn49uvoqhgs3vny9ix20kjz4swpprqgpkbeu9v1443qvn3ex1z2cvf5troehrl24645yv5ow7ym3s0eavqcunak64d46xni72867gfvcodh8x2mzoeq2lsiszzyll1uf17r9ea8do5dbaskhvijlwnzswakpaw6lz2ep8gi 00:09:31.572 13:57:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:31.572 [2024-07-25 13:57:40.728328] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:31.572 [2024-07-25 13:57:40.728398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63300 ] 00:09:31.572 [2024-07-25 13:57:40.867063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.831 [2024-07-25 13:57:40.971745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.831 [2024-07-25 13:57:41.014002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.655  Copying: 511/511 [MB] (average 1414 MBps) 00:09:32.655 00:09:32.655 13:57:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:32.655 13:57:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:32.655 13:57:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:32.655 13:57:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.655 [2024-07-25 13:57:41.954136] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:32.655 [2024-07-25 13:57:41.954276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:09:32.655 { 00:09:32.655 "subsystems": [ 00:09:32.655 { 00:09:32.655 "subsystem": "bdev", 00:09:32.655 "config": [ 00:09:32.655 { 00:09:32.655 "params": { 00:09:32.655 "block_size": 512, 00:09:32.655 "num_blocks": 1048576, 00:09:32.655 "name": "malloc0" 00:09:32.655 }, 00:09:32.655 "method": "bdev_malloc_create" 00:09:32.655 }, 00:09:32.655 { 00:09:32.655 "params": { 00:09:32.655 "filename": "/dev/zram1", 00:09:32.655 "name": "uring0" 00:09:32.655 }, 00:09:32.655 "method": "bdev_uring_create" 00:09:32.655 }, 00:09:32.655 { 00:09:32.655 "method": "bdev_wait_for_examine" 00:09:32.655 } 00:09:32.655 ] 00:09:32.655 } 00:09:32.655 ] 00:09:32.655 } 00:09:32.914 [2024-07-25 13:57:42.093289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.914 [2024-07-25 13:57:42.201101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.173 [2024-07-25 13:57:42.244353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.741  Copying: 248/512 [MB] (248 MBps) Copying: 493/512 [MB] (244 MBps) Copying: 512/512 [MB] (average 247 MBps) 00:09:35.741 00:09:35.741 13:57:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:35.741 13:57:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:35.741 13:57:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:35.741 13:57:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:35.741 [2024-07-25 13:57:44.878782] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:35.741 [2024-07-25 13:57:44.878950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:09:35.741 { 00:09:35.741 "subsystems": [ 00:09:35.741 { 00:09:35.741 "subsystem": "bdev", 00:09:35.741 "config": [ 00:09:35.741 { 00:09:35.741 "params": { 00:09:35.741 "block_size": 512, 00:09:35.741 "num_blocks": 1048576, 00:09:35.741 "name": "malloc0" 00:09:35.741 }, 00:09:35.741 "method": "bdev_malloc_create" 00:09:35.741 }, 00:09:35.741 { 00:09:35.741 "params": { 00:09:35.741 "filename": "/dev/zram1", 00:09:35.741 "name": "uring0" 00:09:35.741 }, 00:09:35.741 "method": "bdev_uring_create" 00:09:35.741 }, 00:09:35.741 { 00:09:35.741 "method": "bdev_wait_for_examine" 00:09:35.741 } 00:09:35.741 ] 00:09:35.741 } 00:09:35.741 ] 00:09:35.741 } 00:09:35.741 [2024-07-25 13:57:45.019941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.999 [2024-07-25 13:57:45.128563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.999 [2024-07-25 13:57:45.172735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.144  Copying: 204/512 [MB] (204 MBps) Copying: 393/512 [MB] (188 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:09:39.144 00:09:39.144 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:39.144 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ yvclvu2zs84rlnm8o780tcyb4zyr578cik7krj63wqdvb4oxzm0ikpb9reay7m4q06yhwi5msxb0v8cid88enbx40xrds6z70tvynggpna882yiwq9csrts4okvpimvj80rr3gvh3t6sjnywmlp9zv7xlw5f3qsmjcvnyl8ah0fb9q03ltl5oja63i7b3cm5gv9hc2i7183b2i4yfeyv6okzv9gz1bvx3m9gz0t5wm5kalur1q928jg3zotj5qgvj8v8s0i1un1odcc04qma4giyj3c1zxl9spsikldlysuey9l2g1rh2xd186x5380xtu7uy5ct0v42ocz9zkb4w1j59imcm347fhi8nqs24xe8h6ih5ir2ctnm8wly91p6tzlegp2r7o18sah9u7k5ivc80cjwkvhr96n4iion65jbf0ja9ijfvrvyvw3uh587cftkdsz3o4irc8my9dbbm51srup8ab2od5k8eaq5pihv85i58h2903po9tu8v7jiuk47ehiqbxlxc3hgkwssmjni1b7f64mhl2t428f6lhyynzf7x8rirq2zaesdt3ol5jwml9p0k2uvsghyt8qhjjg61zg66b2y2spq0vivnpa71lm191q8d4swyml3s227z9ih7th2vdgoqsfbfv7hj31wwt41kzwznbg04sge55bu11jznxioqn91tp6ttjlf9cudmp4h5feo3j8c9o8l6uw1ncpu02dhseaujzq01hajjl2xord2zwst2kdfs6wuork9a5rdv0b0fzsuc64n6u64ecm5tfoha1fkmiaj4ltaie84h86662z1gutku9cbi72t2uz5d6052lvzi9hu6a870cn49uvoqhgs3vny9ix20kjz4swpprqgpkbeu9v1443qvn3ex1z2cvf5troehrl24645yv5ow7ym3s0eavqcunak64d46xni72867gfvcodh8x2mzoeq2lsiszzyll1uf17r9ea8do5dbaskhvijlwnzswakpaw6lz2ep8gi == 
\y\v\c\l\v\u\2\z\s\8\4\r\l\n\m\8\o\7\8\0\t\c\y\b\4\z\y\r\5\7\8\c\i\k\7\k\r\j\6\3\w\q\d\v\b\4\o\x\z\m\0\i\k\p\b\9\r\e\a\y\7\m\4\q\0\6\y\h\w\i\5\m\s\x\b\0\v\8\c\i\d\8\8\e\n\b\x\4\0\x\r\d\s\6\z\7\0\t\v\y\n\g\g\p\n\a\8\8\2\y\i\w\q\9\c\s\r\t\s\4\o\k\v\p\i\m\v\j\8\0\r\r\3\g\v\h\3\t\6\s\j\n\y\w\m\l\p\9\z\v\7\x\l\w\5\f\3\q\s\m\j\c\v\n\y\l\8\a\h\0\f\b\9\q\0\3\l\t\l\5\o\j\a\6\3\i\7\b\3\c\m\5\g\v\9\h\c\2\i\7\1\8\3\b\2\i\4\y\f\e\y\v\6\o\k\z\v\9\g\z\1\b\v\x\3\m\9\g\z\0\t\5\w\m\5\k\a\l\u\r\1\q\9\2\8\j\g\3\z\o\t\j\5\q\g\v\j\8\v\8\s\0\i\1\u\n\1\o\d\c\c\0\4\q\m\a\4\g\i\y\j\3\c\1\z\x\l\9\s\p\s\i\k\l\d\l\y\s\u\e\y\9\l\2\g\1\r\h\2\x\d\1\8\6\x\5\3\8\0\x\t\u\7\u\y\5\c\t\0\v\4\2\o\c\z\9\z\k\b\4\w\1\j\5\9\i\m\c\m\3\4\7\f\h\i\8\n\q\s\2\4\x\e\8\h\6\i\h\5\i\r\2\c\t\n\m\8\w\l\y\9\1\p\6\t\z\l\e\g\p\2\r\7\o\1\8\s\a\h\9\u\7\k\5\i\v\c\8\0\c\j\w\k\v\h\r\9\6\n\4\i\i\o\n\6\5\j\b\f\0\j\a\9\i\j\f\v\r\v\y\v\w\3\u\h\5\8\7\c\f\t\k\d\s\z\3\o\4\i\r\c\8\m\y\9\d\b\b\m\5\1\s\r\u\p\8\a\b\2\o\d\5\k\8\e\a\q\5\p\i\h\v\8\5\i\5\8\h\2\9\0\3\p\o\9\t\u\8\v\7\j\i\u\k\4\7\e\h\i\q\b\x\l\x\c\3\h\g\k\w\s\s\m\j\n\i\1\b\7\f\6\4\m\h\l\2\t\4\2\8\f\6\l\h\y\y\n\z\f\7\x\8\r\i\r\q\2\z\a\e\s\d\t\3\o\l\5\j\w\m\l\9\p\0\k\2\u\v\s\g\h\y\t\8\q\h\j\j\g\6\1\z\g\6\6\b\2\y\2\s\p\q\0\v\i\v\n\p\a\7\1\l\m\1\9\1\q\8\d\4\s\w\y\m\l\3\s\2\2\7\z\9\i\h\7\t\h\2\v\d\g\o\q\s\f\b\f\v\7\h\j\3\1\w\w\t\4\1\k\z\w\z\n\b\g\0\4\s\g\e\5\5\b\u\1\1\j\z\n\x\i\o\q\n\9\1\t\p\6\t\t\j\l\f\9\c\u\d\m\p\4\h\5\f\e\o\3\j\8\c\9\o\8\l\6\u\w\1\n\c\p\u\0\2\d\h\s\e\a\u\j\z\q\0\1\h\a\j\j\l\2\x\o\r\d\2\z\w\s\t\2\k\d\f\s\6\w\u\o\r\k\9\a\5\r\d\v\0\b\0\f\z\s\u\c\6\4\n\6\u\6\4\e\c\m\5\t\f\o\h\a\1\f\k\m\i\a\j\4\l\t\a\i\e\8\4\h\8\6\6\6\2\z\1\g\u\t\k\u\9\c\b\i\7\2\t\2\u\z\5\d\6\0\5\2\l\v\z\i\9\h\u\6\a\8\7\0\c\n\4\9\u\v\o\q\h\g\s\3\v\n\y\9\i\x\2\0\k\j\z\4\s\w\p\p\r\q\g\p\k\b\e\u\9\v\1\4\4\3\q\v\n\3\e\x\1\z\2\c\v\f\5\t\r\o\e\h\r\l\2\4\6\4\5\y\v\5\o\w\7\y\m\3\s\0\e\a\v\q\c\u\n\a\k\6\4\d\4\6\x\n\i\7\2\8\6\7\g\f\v\c\o\d\h\8\x\2\m\z\o\e\q\2\l\s\i\s\z\z\y\l\l\1\u\f\1\7\r\9\e\a\8\d\o\5\d\b\a\s\k\h\v\i\j\l\w\n\z\s\w\a\k\p\a\w\6\l\z\2\e\p\8\g\i ]] 00:09:39.145 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:39.145 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ yvclvu2zs84rlnm8o780tcyb4zyr578cik7krj63wqdvb4oxzm0ikpb9reay7m4q06yhwi5msxb0v8cid88enbx40xrds6z70tvynggpna882yiwq9csrts4okvpimvj80rr3gvh3t6sjnywmlp9zv7xlw5f3qsmjcvnyl8ah0fb9q03ltl5oja63i7b3cm5gv9hc2i7183b2i4yfeyv6okzv9gz1bvx3m9gz0t5wm5kalur1q928jg3zotj5qgvj8v8s0i1un1odcc04qma4giyj3c1zxl9spsikldlysuey9l2g1rh2xd186x5380xtu7uy5ct0v42ocz9zkb4w1j59imcm347fhi8nqs24xe8h6ih5ir2ctnm8wly91p6tzlegp2r7o18sah9u7k5ivc80cjwkvhr96n4iion65jbf0ja9ijfvrvyvw3uh587cftkdsz3o4irc8my9dbbm51srup8ab2od5k8eaq5pihv85i58h2903po9tu8v7jiuk47ehiqbxlxc3hgkwssmjni1b7f64mhl2t428f6lhyynzf7x8rirq2zaesdt3ol5jwml9p0k2uvsghyt8qhjjg61zg66b2y2spq0vivnpa71lm191q8d4swyml3s227z9ih7th2vdgoqsfbfv7hj31wwt41kzwznbg04sge55bu11jznxioqn91tp6ttjlf9cudmp4h5feo3j8c9o8l6uw1ncpu02dhseaujzq01hajjl2xord2zwst2kdfs6wuork9a5rdv0b0fzsuc64n6u64ecm5tfoha1fkmiaj4ltaie84h86662z1gutku9cbi72t2uz5d6052lvzi9hu6a870cn49uvoqhgs3vny9ix20kjz4swpprqgpkbeu9v1443qvn3ex1z2cvf5troehrl24645yv5ow7ym3s0eavqcunak64d46xni72867gfvcodh8x2mzoeq2lsiszzyll1uf17r9ea8do5dbaskhvijlwnzswakpaw6lz2ep8gi == 
\y\v\c\l\v\u\2\z\s\8\4\r\l\n\m\8\o\7\8\0\t\c\y\b\4\z\y\r\5\7\8\c\i\k\7\k\r\j\6\3\w\q\d\v\b\4\o\x\z\m\0\i\k\p\b\9\r\e\a\y\7\m\4\q\0\6\y\h\w\i\5\m\s\x\b\0\v\8\c\i\d\8\8\e\n\b\x\4\0\x\r\d\s\6\z\7\0\t\v\y\n\g\g\p\n\a\8\8\2\y\i\w\q\9\c\s\r\t\s\4\o\k\v\p\i\m\v\j\8\0\r\r\3\g\v\h\3\t\6\s\j\n\y\w\m\l\p\9\z\v\7\x\l\w\5\f\3\q\s\m\j\c\v\n\y\l\8\a\h\0\f\b\9\q\0\3\l\t\l\5\o\j\a\6\3\i\7\b\3\c\m\5\g\v\9\h\c\2\i\7\1\8\3\b\2\i\4\y\f\e\y\v\6\o\k\z\v\9\g\z\1\b\v\x\3\m\9\g\z\0\t\5\w\m\5\k\a\l\u\r\1\q\9\2\8\j\g\3\z\o\t\j\5\q\g\v\j\8\v\8\s\0\i\1\u\n\1\o\d\c\c\0\4\q\m\a\4\g\i\y\j\3\c\1\z\x\l\9\s\p\s\i\k\l\d\l\y\s\u\e\y\9\l\2\g\1\r\h\2\x\d\1\8\6\x\5\3\8\0\x\t\u\7\u\y\5\c\t\0\v\4\2\o\c\z\9\z\k\b\4\w\1\j\5\9\i\m\c\m\3\4\7\f\h\i\8\n\q\s\2\4\x\e\8\h\6\i\h\5\i\r\2\c\t\n\m\8\w\l\y\9\1\p\6\t\z\l\e\g\p\2\r\7\o\1\8\s\a\h\9\u\7\k\5\i\v\c\8\0\c\j\w\k\v\h\r\9\6\n\4\i\i\o\n\6\5\j\b\f\0\j\a\9\i\j\f\v\r\v\y\v\w\3\u\h\5\8\7\c\f\t\k\d\s\z\3\o\4\i\r\c\8\m\y\9\d\b\b\m\5\1\s\r\u\p\8\a\b\2\o\d\5\k\8\e\a\q\5\p\i\h\v\8\5\i\5\8\h\2\9\0\3\p\o\9\t\u\8\v\7\j\i\u\k\4\7\e\h\i\q\b\x\l\x\c\3\h\g\k\w\s\s\m\j\n\i\1\b\7\f\6\4\m\h\l\2\t\4\2\8\f\6\l\h\y\y\n\z\f\7\x\8\r\i\r\q\2\z\a\e\s\d\t\3\o\l\5\j\w\m\l\9\p\0\k\2\u\v\s\g\h\y\t\8\q\h\j\j\g\6\1\z\g\6\6\b\2\y\2\s\p\q\0\v\i\v\n\p\a\7\1\l\m\1\9\1\q\8\d\4\s\w\y\m\l\3\s\2\2\7\z\9\i\h\7\t\h\2\v\d\g\o\q\s\f\b\f\v\7\h\j\3\1\w\w\t\4\1\k\z\w\z\n\b\g\0\4\s\g\e\5\5\b\u\1\1\j\z\n\x\i\o\q\n\9\1\t\p\6\t\t\j\l\f\9\c\u\d\m\p\4\h\5\f\e\o\3\j\8\c\9\o\8\l\6\u\w\1\n\c\p\u\0\2\d\h\s\e\a\u\j\z\q\0\1\h\a\j\j\l\2\x\o\r\d\2\z\w\s\t\2\k\d\f\s\6\w\u\o\r\k\9\a\5\r\d\v\0\b\0\f\z\s\u\c\6\4\n\6\u\6\4\e\c\m\5\t\f\o\h\a\1\f\k\m\i\a\j\4\l\t\a\i\e\8\4\h\8\6\6\6\2\z\1\g\u\t\k\u\9\c\b\i\7\2\t\2\u\z\5\d\6\0\5\2\l\v\z\i\9\h\u\6\a\8\7\0\c\n\4\9\u\v\o\q\h\g\s\3\v\n\y\9\i\x\2\0\k\j\z\4\s\w\p\p\r\q\g\p\k\b\e\u\9\v\1\4\4\3\q\v\n\3\e\x\1\z\2\c\v\f\5\t\r\o\e\h\r\l\2\4\6\4\5\y\v\5\o\w\7\y\m\3\s\0\e\a\v\q\c\u\n\a\k\6\4\d\4\6\x\n\i\7\2\8\6\7\g\f\v\c\o\d\h\8\x\2\m\z\o\e\q\2\l\s\i\s\z\z\y\l\l\1\u\f\1\7\r\9\e\a\8\d\o\5\d\b\a\s\k\h\v\i\j\l\w\n\z\s\w\a\k\p\a\w\6\l\z\2\e\p\8\g\i ]] 00:09:39.145 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:39.403 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:39.403 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:39.403 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:39.403 13:57:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:39.403 [2024-07-25 13:57:48.571735] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:39.403 [2024-07-25 13:57:48.571874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:09:39.403 { 00:09:39.403 "subsystems": [ 00:09:39.403 { 00:09:39.403 "subsystem": "bdev", 00:09:39.403 "config": [ 00:09:39.403 { 00:09:39.403 "params": { 00:09:39.403 "block_size": 512, 00:09:39.403 "num_blocks": 1048576, 00:09:39.403 "name": "malloc0" 00:09:39.403 }, 00:09:39.403 "method": "bdev_malloc_create" 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "params": { 00:09:39.403 "filename": "/dev/zram1", 00:09:39.403 "name": "uring0" 00:09:39.403 }, 00:09:39.403 "method": "bdev_uring_create" 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "method": "bdev_wait_for_examine" 00:09:39.403 } 00:09:39.403 ] 00:09:39.403 } 00:09:39.403 ] 00:09:39.403 } 00:09:39.661 [2024-07-25 13:57:48.710997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.661 [2024-07-25 13:57:48.815479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.661 [2024-07-25 13:57:48.859377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.140  Copying: 178/512 [MB] (178 MBps) Copying: 359/512 [MB] (181 MBps) Copying: 512/512 [MB] (average 180 MBps) 00:09:43.140 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:43.140 13:57:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:43.140 [2024-07-25 13:57:52.247033] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:43.140 [2024-07-25 13:57:52.247111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63477 ] 00:09:43.140 { 00:09:43.140 "subsystems": [ 00:09:43.140 { 00:09:43.140 "subsystem": "bdev", 00:09:43.140 "config": [ 00:09:43.140 { 00:09:43.140 "params": { 00:09:43.140 "block_size": 512, 00:09:43.140 "num_blocks": 1048576, 00:09:43.140 "name": "malloc0" 00:09:43.140 }, 00:09:43.140 "method": "bdev_malloc_create" 00:09:43.140 }, 00:09:43.140 { 00:09:43.140 "params": { 00:09:43.140 "filename": "/dev/zram1", 00:09:43.140 "name": "uring0" 00:09:43.140 }, 00:09:43.140 "method": "bdev_uring_create" 00:09:43.140 }, 00:09:43.140 { 00:09:43.140 "params": { 00:09:43.140 "name": "uring0" 00:09:43.140 }, 00:09:43.140 "method": "bdev_uring_delete" 00:09:43.140 }, 00:09:43.140 { 00:09:43.140 "method": "bdev_wait_for_examine" 00:09:43.140 } 00:09:43.140 ] 00:09:43.140 } 00:09:43.140 ] 00:09:43.140 } 00:09:43.399 [2024-07-25 13:57:52.673463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.657 [2024-07-25 13:57:52.787341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.657 [2024-07-25 13:57:52.830899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.185  Copying: 0/0 [B] (average 0 Bps) 00:09:44.185 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:44.185 13:57:53 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:44.185 [2024-07-25 13:57:53.408312] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:44.185 [2024-07-25 13:57:53.408395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:09:44.185 { 00:09:44.185 "subsystems": [ 00:09:44.185 { 00:09:44.185 "subsystem": "bdev", 00:09:44.185 "config": [ 00:09:44.185 { 00:09:44.185 "params": { 00:09:44.185 "block_size": 512, 00:09:44.185 "num_blocks": 1048576, 00:09:44.185 "name": "malloc0" 00:09:44.185 }, 00:09:44.185 "method": "bdev_malloc_create" 00:09:44.185 }, 00:09:44.185 { 00:09:44.185 "params": { 00:09:44.185 "filename": "/dev/zram1", 00:09:44.185 "name": "uring0" 00:09:44.185 }, 00:09:44.185 "method": "bdev_uring_create" 00:09:44.185 }, 00:09:44.185 { 00:09:44.185 "params": { 00:09:44.185 "name": "uring0" 00:09:44.185 }, 00:09:44.185 "method": "bdev_uring_delete" 00:09:44.185 }, 00:09:44.185 { 00:09:44.185 "method": "bdev_wait_for_examine" 00:09:44.185 } 00:09:44.185 ] 00:09:44.185 } 00:09:44.185 ] 00:09:44.185 } 00:09:44.448 [2024-07-25 13:57:53.532851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.448 [2024-07-25 13:57:53.639716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.448 [2024-07-25 13:57:53.683742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.707 [2024-07-25 13:57:53.851796] bdev.c:8187:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:44.707 [2024-07-25 13:57:53.851845] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:44.707 [2024-07-25 13:57:53.851852] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:44.707 [2024-07-25 13:57:53.851860] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.967 [2024-07-25 13:57:54.103174] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:44.967 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:45.226 00:09:45.226 real 0m13.757s 00:09:45.226 user 0m9.425s 00:09:45.226 sys 0m10.720s 00:09:45.226 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.226 13:57:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:45.226 ************************************ 00:09:45.226 END TEST dd_uring_copy 00:09:45.226 ************************************ 00:09:45.226 00:09:45.226 real 0m13.889s 00:09:45.226 user 0m9.468s 00:09:45.226 sys 0m10.813s 00:09:45.226 ************************************ 00:09:45.226 END TEST spdk_dd_uring 00:09:45.226 ************************************ 00:09:45.226 13:57:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.226 13:57:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:45.226 13:57:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:45.226 13:57:54 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.226 13:57:54 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.226 13:57:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:45.226 ************************************ 00:09:45.226 START TEST spdk_dd_sparse 00:09:45.226 ************************************ 00:09:45.226 13:57:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:45.486 * Looking for test storage... 00:09:45.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:45.486 1+0 records in 00:09:45.486 1+0 records out 00:09:45.486 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00721791 s, 581 MB/s 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:45.486 1+0 records in 00:09:45.486 1+0 records out 00:09:45.486 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0070332 s, 596 MB/s 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:45.486 1+0 records in 00:09:45.486 1+0 records out 00:09:45.486 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0091912 s, 456 MB/s 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:45.486 ************************************ 00:09:45.486 START TEST dd_sparse_file_to_file 00:09:45.486 ************************************ 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 
00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:45.486 13:57:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:45.486 [2024-07-25 13:57:54.694887] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:45.486 [2024-07-25 13:57:54.695039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63602 ] 00:09:45.486 { 00:09:45.486 "subsystems": [ 00:09:45.486 { 00:09:45.486 "subsystem": "bdev", 00:09:45.486 "config": [ 00:09:45.486 { 00:09:45.486 "params": { 00:09:45.486 "block_size": 4096, 00:09:45.486 "filename": "dd_sparse_aio_disk", 00:09:45.486 "name": "dd_aio" 00:09:45.486 }, 00:09:45.486 "method": "bdev_aio_create" 00:09:45.486 }, 00:09:45.486 { 00:09:45.486 "params": { 00:09:45.486 "lvs_name": "dd_lvstore", 00:09:45.486 "bdev_name": "dd_aio" 00:09:45.486 }, 00:09:45.486 "method": "bdev_lvol_create_lvstore" 00:09:45.486 }, 00:09:45.486 { 00:09:45.486 "method": "bdev_wait_for_examine" 00:09:45.486 } 00:09:45.486 ] 00:09:45.486 } 00:09:45.486 ] 00:09:45.486 } 00:09:45.745 [2024-07-25 13:57:54.831584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.745 [2024-07-25 13:57:54.932213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.745 [2024-07-25 13:57:54.975874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.005  Copying: 12/36 [MB] (average 750 MBps) 00:09:46.005 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file 
-- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:46.005 ************************************ 00:09:46.005 END TEST dd_sparse_file_to_file 00:09:46.005 ************************************ 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:46.005 00:09:46.005 real 0m0.665s 00:09:46.005 user 0m0.418s 00:09:46.005 sys 0m0.316s 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.005 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:46.265 ************************************ 00:09:46.265 START TEST dd_sparse_file_to_bdev 00:09:46.265 ************************************ 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:46.265 13:57:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:46.265 [2024-07-25 13:57:55.403089] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:46.265 [2024-07-25 13:57:55.403224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63650 ] 00:09:46.265 { 00:09:46.265 "subsystems": [ 00:09:46.265 { 00:09:46.265 "subsystem": "bdev", 00:09:46.265 "config": [ 00:09:46.265 { 00:09:46.265 "params": { 00:09:46.265 "block_size": 4096, 00:09:46.265 "filename": "dd_sparse_aio_disk", 00:09:46.265 "name": "dd_aio" 00:09:46.265 }, 00:09:46.265 "method": "bdev_aio_create" 00:09:46.265 }, 00:09:46.265 { 00:09:46.265 "params": { 00:09:46.265 "lvs_name": "dd_lvstore", 00:09:46.265 "lvol_name": "dd_lvol", 00:09:46.265 "size_in_mib": 36, 00:09:46.265 "thin_provision": true 00:09:46.265 }, 00:09:46.265 "method": "bdev_lvol_create" 00:09:46.265 }, 00:09:46.265 { 00:09:46.265 "method": "bdev_wait_for_examine" 00:09:46.265 } 00:09:46.265 ] 00:09:46.265 } 00:09:46.265 ] 00:09:46.265 } 00:09:46.265 [2024-07-25 13:57:55.543537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.525 [2024-07-25 13:57:55.659092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.525 [2024-07-25 13:57:55.704921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.785  Copying: 12/36 [MB] (average 600 MBps) 00:09:46.785 00:09:46.785 00:09:46.785 real 0m0.645s 00:09:46.785 user 0m0.433s 00:09:46.785 sys 0m0.304s 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 ************************************ 00:09:46.785 END TEST dd_sparse_file_to_bdev 00:09:46.785 ************************************ 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 ************************************ 00:09:46.785 START TEST dd_sparse_bdev_to_file 00:09:46.785 ************************************ 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:09:46.785 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:47.044 [2024-07-25 13:57:56.101940] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:47.044 [2024-07-25 13:57:56.102075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63688 ] 00:09:47.044 { 00:09:47.044 "subsystems": [ 00:09:47.044 { 00:09:47.044 "subsystem": "bdev", 00:09:47.044 "config": [ 00:09:47.044 { 00:09:47.044 "params": { 00:09:47.044 "block_size": 4096, 00:09:47.044 "filename": "dd_sparse_aio_disk", 00:09:47.044 "name": "dd_aio" 00:09:47.044 }, 00:09:47.044 "method": "bdev_aio_create" 00:09:47.044 }, 00:09:47.044 { 00:09:47.044 "method": "bdev_wait_for_examine" 00:09:47.044 } 00:09:47.044 ] 00:09:47.044 } 00:09:47.044 ] 00:09:47.044 } 00:09:47.044 [2024-07-25 13:57:56.231294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.044 [2024-07-25 13:57:56.331825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.303 [2024-07-25 13:57:56.375659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.563  Copying: 12/36 [MB] (average 1333 MBps) 00:09:47.563 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:47.563 00:09:47.563 real 0m0.638s 00:09:47.563 user 0m0.415s 00:09:47.563 sys 0m0.293s 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.563 ************************************ 00:09:47.563 END TEST dd_sparse_bdev_to_file 00:09:47.563 ************************************ 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:47.563 ************************************ 00:09:47.563 
END TEST spdk_dd_sparse 00:09:47.563 ************************************ 00:09:47.563 00:09:47.563 real 0m2.262s 00:09:47.563 user 0m1.375s 00:09:47.563 sys 0m1.123s 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.563 13:57:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:47.563 13:57:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:47.563 13:57:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.563 13:57:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.563 13:57:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:47.563 ************************************ 00:09:47.563 START TEST spdk_dd_negative 00:09:47.563 ************************************ 00:09:47.563 13:57:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:47.822 * Looking for test storage... 00:09:47.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:47.822 ************************************ 00:09:47.822 START TEST dd_invalid_arguments 00:09:47.822 ************************************ 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:47.822 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:47.822 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:47.822 00:09:47.822 CPU options: 00:09:47.822 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:47.822 (like [0,1,10]) 00:09:47.822 --lcores lcore to CPU mapping list. The list is in the format: 00:09:47.822 [<,lcores[@CPUs]>...] 00:09:47.822 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:47.822 Within the group, '-' is used for range separator, 00:09:47.822 ',' is used for single number separator. 00:09:47.822 '( )' can be omitted for single element group, 00:09:47.822 '@' can be omitted if cpus and lcores have the same value 00:09:47.822 --disable-cpumask-locks Disable CPU core lock files. 00:09:47.823 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:47.823 pollers in the app support interrupt mode) 00:09:47.823 -p, --main-core main (primary) core for DPDK 00:09:47.823 00:09:47.823 Configuration options: 00:09:47.823 -c, --config, --json JSON config file 00:09:47.823 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:47.823 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:47.823 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:47.823 --rpcs-allowed comma-separated list of permitted RPCS 00:09:47.823 --json-ignore-init-errors don't exit on invalid config entry 00:09:47.823 00:09:47.823 Memory options: 00:09:47.823 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:47.823 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:47.823 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:47.823 -R, --huge-unlink unlink huge files after initialization 00:09:47.823 -n, --mem-channels number of memory channels used for DPDK 00:09:47.823 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:47.823 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:47.823 --no-huge run without using hugepages 00:09:47.823 -i, --shm-id shared memory ID (optional) 00:09:47.823 -g, --single-file-segments force creating just one hugetlbfs file 00:09:47.823 00:09:47.823 PCI options: 00:09:47.823 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:47.823 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:47.823 -u, --no-pci disable PCI access 00:09:47.823 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:47.823 00:09:47.823 Log options: 00:09:47.823 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:47.823 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:47.823 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:47.823 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:47.823 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:09:47.823 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:09:47.823 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:09:47.823 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:47.823 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:09:47.823 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:09:47.823 virtio_vfio_user, vmd) 00:09:47.823 --silence-noticelog 
disable notice level logging to stderr 00:09:47.823 00:09:47.823 Trace options: 00:09:47.823 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:47.823 setting 0 to disable trace (default 32768) 00:09:47.823 Tracepoints vary in size and can use more than one trace entry. 00:09:47.823 -e, --tpoint-group [:] 00:09:47.823 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:47.823 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:47.823 [2024-07-25 13:57:56.976488] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:47.823 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:09:47.823 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:47.823 a tracepoint group. First tpoint inside a group can be enabled by 00:09:47.823 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:47.823 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:47.823 in /include/spdk_internal/trace_defs.h 00:09:47.823 00:09:47.823 Other options: 00:09:47.823 -h, --help show this usage 00:09:47.823 -v, --version print SPDK version 00:09:47.823 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:47.823 --env-context Opaque context for use of the env implementation 00:09:47.823 00:09:47.823 Application specific: 00:09:47.823 [--------- DD Options ---------] 00:09:47.823 --if Input file. Must specify either --if or --ib. 00:09:47.823 --ib Input bdev. Must specifier either --if or --ib 00:09:47.823 --of Output file. Must specify either --of or --ob. 00:09:47.823 --ob Output bdev. Must specify either --of or --ob. 00:09:47.823 --iflag Input file flags. 00:09:47.823 --oflag Output file flags. 00:09:47.823 --bs I/O unit size (default: 4096) 00:09:47.823 --qd Queue depth (default: 2) 00:09:47.823 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:47.823 --skip Skip this many I/O units at start of input. (default: 0) 00:09:47.823 --seek Skip this many I/O units at start of output. (default: 0) 00:09:47.823 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:47.823 --sparse Enable hole skipping in input target 00:09:47.823 Available iflag and oflag values: 00:09:47.823 append - append mode 00:09:47.823 direct - use direct I/O for data 00:09:47.823 directory - fail unless a directory 00:09:47.823 dsync - use synchronized I/O for data 00:09:47.823 noatime - do not update access time 00:09:47.823 noctty - do not assign controlling terminal from file 00:09:47.823 nofollow - do not follow symlinks 00:09:47.823 nonblock - use non-blocking I/O 00:09:47.823 sync - use synchronized I/O for data and metadata 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.823 00:09:47.823 real 0m0.058s 00:09:47.823 user 0m0.034s 00:09:47.823 sys 0m0.022s 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.823 13:57:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:47.823 ************************************ 00:09:47.823 END TEST dd_invalid_arguments 00:09:47.823 ************************************ 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:47.823 ************************************ 00:09:47.823 START TEST dd_double_input 00:09:47.823 ************************************ 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:47.823 [2024-07-25 13:57:57.081988] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.823 00:09:47.823 real 0m0.062s 00:09:47.823 user 0m0.034s 00:09:47.823 sys 0m0.026s 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.823 13:57:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:47.823 ************************************ 00:09:47.823 END TEST dd_double_input 00:09:47.823 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.083 ************************************ 00:09:48.083 START TEST dd_double_output 00:09:48.083 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:48.083 [2024-07-25 13:57:57.174136] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.083 00:09:48.083 real 0m0.052s 00:09:48.083 user 0m0.031s 00:09:48.083 sys 0m0.020s 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.083 ************************************ 00:09:48.083 END TEST dd_double_output 00:09:48.083 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.083 ************************************ 00:09:48.083 START TEST dd_no_input 00:09:48.083 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:48.083 [2024-07-25 13:57:57.280120] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.083 00:09:48.083 real 0m0.067s 00:09:48.083 user 0m0.043s 00:09:48.083 sys 0m0.022s 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.083 ************************************ 00:09:48.083 END TEST dd_no_input 00:09:48.083 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.083 ************************************ 00:09:48.083 START TEST dd_no_output 00:09:48.083 ************************************ 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:48.083 [2024-07-25 13:57:57.370162] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:09:48.083 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.084 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.084 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.084 00:09:48.084 real 0m0.049s 00:09:48.084 user 0m0.028s 00:09:48.084 sys 0m0.021s 00:09:48.084 ************************************ 00:09:48.084 END TEST dd_no_output 00:09:48.084 ************************************ 00:09:48.084 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.084 13:57:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:48.342 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.343 ************************************ 00:09:48.343 START TEST dd_wrong_blocksize 00:09:48.343 ************************************ 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:48.343 [2024-07-25 13:57:57.482353] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.343 00:09:48.343 real 0m0.067s 00:09:48.343 user 0m0.040s 00:09:48.343 sys 0m0.026s 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:48.343 ************************************ 00:09:48.343 END TEST dd_wrong_blocksize 00:09:48.343 ************************************ 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.343 ************************************ 00:09:48.343 START TEST dd_smaller_blocksize 00:09:48.343 ************************************ 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.343 13:57:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:48.343 [2024-07-25 13:57:57.607143] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:48.343 [2024-07-25 13:57:57.607216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63901 ] 00:09:48.613 [2024-07-25 13:57:57.734915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.613 [2024-07-25 13:57:57.840063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.613 [2024-07-25 13:57:57.883182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.874 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:48.874 [2024-07-25 13:57:58.150605] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:48.874 [2024-07-25 13:57:58.150676] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.133 [2024-07-25 13:57:58.245931] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.133 00:09:49.133 real 0m0.791s 00:09:49.133 user 0m0.362s 00:09:49.133 sys 0m0.323s 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:49.133 ************************************ 00:09:49.133 END TEST dd_smaller_blocksize 00:09:49.133 ************************************ 00:09:49.133 13:57:58 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:49.133 ************************************ 00:09:49.133 START TEST dd_invalid_count 00:09:49.133 ************************************ 00:09:49.133 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:49.134 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:49.393 [2024-07-25 13:57:58.457154] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:49.393 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.394 00:09:49.394 real 0m0.071s 00:09:49.394 user 0m0.042s 00:09:49.394 sys 0m0.028s 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_count 
-- common/autotest_common.sh@10 -- # set +x 00:09:49.394 ************************************ 00:09:49.394 END TEST dd_invalid_count 00:09:49.394 ************************************ 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:49.394 ************************************ 00:09:49.394 START TEST dd_invalid_oflag 00:09:49.394 ************************************ 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:49.394 [2024-07-25 13:57:58.562042] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.394 00:09:49.394 real 0m0.067s 00:09:49.394 user 0m0.037s 00:09:49.394 sys 0m0.029s 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 
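(Editor's note, not part of the captured output.) The negative cases in this section all follow the same shape: each helper (double_input, no_input, invalid_oflag, and so on) wraps spdk_dd in the harness's NOT helper, spdk_dd is expected to refuse the argument combination, and the resulting exit status (es=22 here, es=2 for the getopt failure at the top of the section) is what the test asserts on. A minimal stand-alone sketch of the invalid_oflag check, outside the harness, assuming a built spdk_dd at the path the log prints; /tmp is only used for messages, nothing is written.
#!/usr/bin/env bash
# Hand-rolled version of the dd_invalid_oflag check; the harness's NOT helper and
# its es bookkeeping are not reproduced, only the expected-failure invocation.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path taken from the log above
"$SPDK_DD" --ib= --ob= --oflag=0                          # log: "--oflags may be used only with --of"
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "expected rejection, spdk_dd exited with $rc"
else
    echo "unexpected: spdk_dd accepted --oflag without --of" >&2
    exit 1
fi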
00:09:49.394 ************************************ 00:09:49.394 END TEST dd_invalid_oflag 00:09:49.394 ************************************ 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:49.394 ************************************ 00:09:49.394 START TEST dd_invalid_iflag 00:09:49.394 ************************************ 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:49.394 [2024-07-25 13:57:58.666255] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.394 00:09:49.394 real 0m0.059s 00:09:49.394 user 0m0.031s 00:09:49.394 sys 0m0.026s 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.394 13:57:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:49.394 ************************************ 
00:09:49.394 END TEST dd_invalid_iflag 00:09:49.394 ************************************ 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:49.653 ************************************ 00:09:49.653 START TEST dd_unknown_flag 00:09:49.653 ************************************ 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:49.653 13:57:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:49.653 [2024-07-25 13:57:58.762605] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
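(Editor's note, not part of the captured output.) The flag names spdk_dd accepts for --iflag/--oflag are the ones listed in the usage text at the top of this section (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync); dd_unknown_flag instead feeds "-1", which parse_flags rejects in the entries that follow. A hedged sketch contrasting the two, assuming a built spdk_dd and the dd.dump0 input file the harness creates; /tmp/dd.out is an illustrative output path, not one the harness uses.
#!/usr/bin/env bash
# Contrast a flag from the documented list with the bogus "-1" this test feeds.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IN=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
"$SPDK_DD" --if="$IN" --of=/tmp/dd.out --oflag=dsync      # "dsync" appears in the usage text above
"$SPDK_DD" --if="$IN" --of=/tmp/dd.out --oflag=-1 \
    && echo "unexpected: -1 accepted" >&2 \
    || echo "expected rejection: Unknown file flag: -1"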
00:09:49.653 [2024-07-25 13:57:58.763018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63993 ] 00:09:49.653 [2024-07-25 13:57:58.902873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.913 [2024-07-25 13:57:59.008866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.913 [2024-07-25 13:57:59.051731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:49.913 [2024-07-25 13:57:59.080056] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:49.913 [2024-07-25 13:57:59.080116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.913 [2024-07-25 13:57:59.080161] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:49.913 [2024-07-25 13:57:59.080168] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.913 [2024-07-25 13:57:59.080381] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:49.913 [2024-07-25 13:57:59.080397] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.913 [2024-07-25 13:57:59.080440] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:49.913 [2024-07-25 13:57:59.080452] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:49.913 [2024-07-25 13:57:59.174302] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:09:50.174 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.174 00:09:50.174 real 0m0.549s 00:09:50.174 user 0m0.328s 00:09:50.174 sys 0m0.129s 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:50.175 ************************************ 00:09:50.175 END TEST dd_unknown_flag 00:09:50.175 ************************************ 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:50.175 ************************************ 00:09:50.175 START TEST dd_invalid_json 00:09:50.175 ************************************ 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:09:50.175 13:57:59 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:50.175 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:50.175 [2024-07-25 13:57:59.384260] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
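(Editor's note, not part of the captured output.) In the dd_invalid_json case the harness feeds spdk_dd an empty --json configuration stream (the bare ":" producer visible above, wired to /dev/fd/62) and expects parse_json to reject it with "JSON data cannot be empty". A rough equivalent using process substitution in place of the harness's fd plumbing, assuming a built spdk_dd and the dd.dump0 input file; /tmp/dd.out is illustrative.
#!/usr/bin/env bash
# Empty JSON config stream; <(:) stands in for the /dev/fd/62 wiring the harness uses.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IN=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
if "$SPDK_DD" --if="$IN" --of=/tmp/dd.out --json <(:); then
    echo "unexpected: empty JSON config accepted" >&2
    exit 1
else
    echo "expected rejection: JSON data cannot be empty"
fi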
00:09:50.175 [2024-07-25 13:57:59.384343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64027 ] 00:09:50.454 [2024-07-25 13:57:59.523708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.454 [2024-07-25 13:57:59.626005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.454 [2024-07-25 13:57:59.626070] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:50.454 [2024-07-25 13:57:59.626079] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:50.454 [2024-07-25 13:57:59.626086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:50.454 [2024-07-25 13:57:59.626119] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.454 00:09:50.454 real 0m0.397s 00:09:50.454 user 0m0.225s 00:09:50.454 sys 0m0.070s 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.454 13:57:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:50.454 ************************************ 00:09:50.454 END TEST dd_invalid_json 00:09:50.454 ************************************ 00:09:50.714 00:09:50.714 real 0m2.962s 00:09:50.714 user 0m1.450s 00:09:50.714 sys 0m1.201s 00:09:50.714 13:57:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.714 13:57:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:50.714 ************************************ 00:09:50.714 END TEST spdk_dd_negative 00:09:50.714 ************************************ 00:09:50.714 00:09:50.714 real 1m13.299s 00:09:50.714 user 0m47.637s 00:09:50.714 sys 0m29.573s 00:09:50.714 13:57:59 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.714 13:57:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:50.714 ************************************ 00:09:50.714 END TEST spdk_dd 00:09:50.714 ************************************ 00:09:50.714 13:57:59 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@264 -- # timing_exit lib 00:09:50.714 13:57:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.714 13:57:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.714 13:57:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:09:50.714 13:57:59 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:09:50.714 13:57:59 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:50.714 13:57:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.714 13:57:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.714 13:57:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.714 ************************************ 00:09:50.714 START TEST nvmf_tcp 00:09:50.714 ************************************ 00:09:50.714 13:57:59 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:50.714 * Looking for test storage... 00:09:50.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:50.714 13:58:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:50.975 13:58:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:50.975 13:58:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:50.975 13:58:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.975 13:58:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.975 13:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.975 ************************************ 00:09:50.975 START TEST nvmf_target_core 00:09:50.975 ************************************ 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:50.975 * Looking for test storage... 00:09:50.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.975 13:58:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.976 ************************************ 00:09:50.976 START TEST nvmf_host_management 00:09:50.976 ************************************ 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:50.976 * Looking for test storage... 
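(Editor's note, not part of the captured output.) Further down in this log, nvmftestinit drives nvmf_veth_init from nvmf/common.sh: stale interfaces are torn down first (the "Cannot find device" and "Cannot open network namespace" messages below are the expected result of that cleanup on a fresh VM), then the nvmf_tgt_ns_spdk namespace and the veth pair are created. A condensed sketch of only the steps visible in the entries below, illustrative rather than the full nvmf_veth_init sequence, and requiring root on a disposable test host.
#!/usr/bin/env bash
# Teardown is tolerant of missing devices; only the setup steps shown in the log follow.
ip link set nvmf_init_br nomaster 2>/dev/null || true        # "Cannot find device" is fine
ip link set nvmf_tgt_br nomaster 2>/dev/null || true
ip link set nvmf_tgt_br2 nomaster 2>/dev/null || true
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # veth pair for the initiator side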
00:09:50.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.976 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:50.977 Cannot find device "nvmf_init_br" 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:50.977 Cannot find device "nvmf_tgt_br" 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.977 Cannot find device "nvmf_tgt_br2" 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:50.977 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:51.239 Cannot find device "nvmf_init_br" 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:51.239 Cannot find device "nvmf_tgt_br" 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:51.239 Cannot find device "nvmf_tgt_br2" 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:51.239 Cannot find device "nvmf_br" 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:51.239 Cannot find device "nvmf_init_if" 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.239 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.240 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:51.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:09:51.505 00:09:51.505 --- 10.0.0.2 ping statistics --- 00:09:51.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.505 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:51.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:51.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:09:51.505 00:09:51.505 --- 10.0.0.3 ping statistics --- 00:09:51.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.505 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:51.505 00:09:51.505 --- 10.0.0.1 ping statistics --- 00:09:51.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.505 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64311 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64311 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64311 ']' 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.505 13:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.505 [2024-07-25 13:58:00.687536] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:51.505 [2024-07-25 13:58:00.688019] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.772 [2024-07-25 13:58:00.816528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.772 [2024-07-25 13:58:00.931589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.772 [2024-07-25 13:58:00.931653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.772 [2024-07-25 13:58:00.931666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.772 [2024-07-25 13:58:00.931673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.772 [2024-07-25 13:58:00.931678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.772 [2024-07-25 13:58:00.931791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.772 [2024-07-25 13:58:00.931961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.772 [2024-07-25 13:58:00.932068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.772 [2024-07-25 13:58:00.932074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.772 [2024-07-25 13:58:00.974807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.348 [2024-07-25 13:58:01.623741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.348 13:58:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.348 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 Malloc0 00:09:52.606 [2024-07-25 13:58:01.703518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64365 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64365 /var/tmp/bdevperf.sock 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64365 ']' 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
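The rpcs.txt batch that host_management.sh feeds to rpc_cmd above (the cat at host_management.sh@23, executed by the rpc_cmd at @30) is not echoed into this trace, so the target-side configuration is only visible through its side effects: the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2 port 4420. A minimal sketch of equivalent rpc.py calls is below; the serial number, the use of rpc.py instead of the test's rpc_cmd wrapper, and the exact ordering are assumptions, while the bdev name, NQNs, address and port are taken from this log.

# Sketch only, assuming the default RPC socket /var/tmp/spdk.sock; the test drives the
# same configuration through rpcs.txt and its rpc_cmd wrapper.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 64 MiB malloc bdev with 512 B blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 above)
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem with that namespace; only the bdevperf host NQN is allowed, which is what
# lets the test force a controller reset later by revoking the host again.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Listener matching "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420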
00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.606 { 00:09:52.606 "params": { 00:09:52.606 "name": "Nvme$subsystem", 00:09:52.606 "trtype": "$TEST_TRANSPORT", 00:09:52.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.606 "adrfam": "ipv4", 00:09:52.606 "trsvcid": "$NVMF_PORT", 00:09:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.606 "hdgst": ${hdgst:-false}, 00:09:52.606 "ddgst": ${ddgst:-false} 00:09:52.606 }, 00:09:52.606 "method": "bdev_nvme_attach_controller" 00:09:52.606 } 00:09:52.606 EOF 00:09:52.606 )") 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:52.606 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.606 "params": { 00:09:52.606 "name": "Nvme0", 00:09:52.606 "trtype": "tcp", 00:09:52.606 "traddr": "10.0.0.2", 00:09:52.606 "adrfam": "ipv4", 00:09:52.606 "trsvcid": "4420", 00:09:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:52.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:52.606 "hdgst": false, 00:09:52.606 "ddgst": false 00:09:52.606 }, 00:09:52.606 "method": "bdev_nvme_attach_controller" 00:09:52.606 }' 00:09:52.606 [2024-07-25 13:58:01.799929] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:09:52.606 [2024-07-25 13:58:01.799995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64365 ] 00:09:52.864 [2024-07-25 13:58:01.924559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.864 [2024-07-25 13:58:02.033902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.864 [2024-07-25 13:58:02.086424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.123 Running I/O for 10 seconds... 
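bdevperf gets its bdev layer configured entirely from the JSON document shown above: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem and the result is handed to bdevperf through process substitution as /dev/fd/63. To rerun the same workload by hand, the printed stanza can be wrapped in a standalone config file; the "subsystems"/"bdev" envelope and the /tmp path are assumptions (the helper assembles the envelope with jq), while the attach parameters and the bdevperf flags are copied from the trace.

# Sketch: a standalone equivalent of '--json /dev/fd/63 -q 64 -o 65536 -w verify -t 10'
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64 outstanding 64 KiB verify I/Os for 10 seconds against the attached Nvme0n1 bdev
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10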
00:09:53.383 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.383 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:53.383 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:53.383 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.383 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 13:58:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.644 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 [2024-07-25 13:58:02.771906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.771954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.771976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.771983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.771992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.771999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 
[2024-07-25 13:58:02.772086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.644 [2024-07-25 13:58:02.772094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.644 [2024-07-25 13:58:02.772100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 
[2024-07-25 13:58:02.772243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 
[2024-07-25 13:58:02.772400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 
[2024-07-25 13:58:02.772549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.645 [2024-07-25 13:58:02.772599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.645 [2024-07-25 13:58:02.772605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 
[2024-07-25 13:58:02.772710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 
[2024-07-25 13:58:02.772853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.646 [2024-07-25 13:58:02.772925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.772934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5ec0 is same with the state(5) to be set 00:09:53.646 [2024-07-25 13:58:02.772996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ff5ec0 was disconnected and freed. reset controller. 
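The 64 aborted WRITEs dumped above are the bdevperf verify queue caught in flight: -q 64 commands of -o 65536 bytes each (65536 / 512 = 128 blocks, hence len:128 and LBAs advancing in steps of 128, cid 0 through 63), all completed with ABORTED - SQ DELETION once the target tears the queue pair down after the host is removed from the subsystem. When reading a saved console log the count is easy to confirm; the filename below is illustrative.

# Expect 64 I/O-queue (qid:1) aborts for a '-q 64' bdevperf run; admin-queue aborts show qid:0
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' console.log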
00:09:53.646 [2024-07-25 13:58:02.773076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.646 [2024-07-25 13:58:02.773086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.773094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.646 [2024-07-25 13:58:02.773100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.773107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.646 [2024-07-25 13:58:02.773114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.773121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.646 [2024-07-25 13:58:02.773127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.646 [2024-07-25 13:58:02.773133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd50 is same with the state(5) to be set 00:09:53.646 [2024-07-25 13:58:02.774223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:53.646 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.646 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:53.646 task offset: 122880 on job bdev=Nvme0n1 fails 00:09:53.646 00:09:53.646 Latency(us) 00:09:53.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:53.646 Job: Nvme0n1 ended in about 0.58 seconds with error 00:09:53.646 Verification LBA range: start 0x0 length 0x400 00:09:53.646 Nvme0n1 : 0.58 1657.58 103.60 110.51 0.00 35310.97 1688.48 34799.90 00:09:53.646 =================================================================================================================== 00:09:53.646 Total : 1657.58 103.60 110.51 0.00 35310.97 1688.48 34799.90 00:09:53.646 [2024-07-25 13:58:02.776514] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:53.646 [2024-07-25 13:58:02.776536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fedd50 (9): Bad file descriptor 00:09:53.646 [2024-07-25 13:58:02.783495] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
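The failover exercised here is driven by two subsystem-host RPCs: host_management.sh@84 revokes the bdevperf host NQN while verify I/O is running, which disconnects the qpair, aborts the in-flight commands and makes bdev_nvme reset the controller, and @85 re-admits it so the reconnect succeeds ("Resetting controller successful." above). A standalone sketch against a running target follows; using rpc.py rather than the test's rpc_cmd wrapper is the only assumption, the NQNs are the ones from this trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Revoke access: the target drops the host's connection, in-flight I/O is completed
# as ABORTED - SQ DELETION and bdev_nvme begins resetting the controller.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-admit the host so the initiator's next reconnect attempt succeeds.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0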
00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64365 00:09:54.582 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64365) - No such process 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.582 { 00:09:54.582 "params": { 00:09:54.582 "name": "Nvme$subsystem", 00:09:54.582 "trtype": "$TEST_TRANSPORT", 00:09:54.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.582 "adrfam": "ipv4", 00:09:54.582 "trsvcid": "$NVMF_PORT", 00:09:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.582 "hdgst": ${hdgst:-false}, 00:09:54.582 "ddgst": ${ddgst:-false} 00:09:54.582 }, 00:09:54.582 "method": "bdev_nvme_attach_controller" 00:09:54.582 } 00:09:54.582 EOF 00:09:54.582 )") 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:54.582 13:58:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.582 "params": { 00:09:54.582 "name": "Nvme0", 00:09:54.582 "trtype": "tcp", 00:09:54.582 "traddr": "10.0.0.2", 00:09:54.582 "adrfam": "ipv4", 00:09:54.582 "trsvcid": "4420", 00:09:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:54.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:54.582 "hdgst": false, 00:09:54.582 "ddgst": false 00:09:54.582 }, 00:09:54.582 "method": "bdev_nvme_attach_controller" 00:09:54.582 }' 00:09:54.582 [2024-07-25 13:58:03.837599] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:09:54.582 [2024-07-25 13:58:03.837668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64403 ] 00:09:54.841 [2024-07-25 13:58:03.966615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.841 [2024-07-25 13:58:04.088135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.841 [2024-07-25 13:58:04.139560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.100 Running I/O for 1 seconds... 00:09:56.058 00:09:56.058 Latency(us) 00:09:56.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.058 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:56.058 Verification LBA range: start 0x0 length 0x400 00:09:56.058 Nvme0n1 : 1.03 1859.07 116.19 0.00 0.00 33814.13 3720.38 33197.28 00:09:56.058 =================================================================================================================== 00:09:56.058 Total : 1859.07 116.19 0.00 0.00 33814.13 3720.38 33197.28 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:56.317 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.318 rmmod nvme_tcp 00:09:56.318 rmmod nvme_fabrics 00:09:56.318 rmmod nvme_keyring 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64311 ']' 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64311 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64311 ']' 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64311 00:09:56.318 13:58:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.318 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64311 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:56.577 killing process with pid 64311 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64311' 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64311 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64311 00:09:56.577 [2024-07-25 13:58:05.825802] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.577 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:56.838 00:09:56.838 real 0m5.774s 00:09:56.838 user 0m22.188s 00:09:56.838 sys 0m1.344s 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.838 ************************************ 00:09:56.838 END TEST nvmf_host_management 00:09:56.838 ************************************ 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.838 13:58:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.838 ************************************ 00:09:56.838 START TEST nvmf_lvol 00:09:56.838 ************************************ 00:09:56.838 13:58:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:56.838 * Looking for test storage... 00:09:56.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.838 13:58:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:56.838 13:58:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.838 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:56.839 Cannot find device "nvmf_tgt_br" 00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 
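For reference, the nvmf_veth_init sequence traced below first tears down any leftover interfaces (hence the "Cannot find device" / "Cannot open network namespace" messages) and then rebuilds the test network: a network namespace for the target, two veth pairs, and a bridge joining the host-side peers. A minimal standalone sketch of that topology, using the same names and addresses that appear in the trace (the second target interface on 10.0.0.3, error handling and cleanup are omitted), would be roughly:

  ip netns add nvmf_tgt_ns_spdk                                   # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                 # bridge the host-side peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                              # sanity check: initiator reaches target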
00:09:56.839 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.098 Cannot find device "nvmf_tgt_br2" 00:09:57.098 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:57.099 Cannot find device "nvmf_tgt_br" 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.099 Cannot find device "nvmf_tgt_br2" 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.099 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:09:57.358 00:09:57.358 --- 10.0.0.2 ping statistics --- 00:09:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.358 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:57.358 00:09:57.358 --- 10.0.0.3 ping statistics --- 00:09:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.358 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:57.358 00:09:57.358 --- 10.0.0.1 ping statistics --- 00:09:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.358 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64629 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64629 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64629 ']' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.358 13:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:57.358 [2024-07-25 13:58:06.522200] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
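With the veth topology up and nvmf_tgt launched inside the namespace (the ip netns exec ... nvmf_tgt -m 0x7 invocation above), nvmf_lvol.sh builds its test stack entirely through scripts/rpc.py over the target's UNIX-domain RPC socket, as traced below. Condensed into a sketch (the UUIDs printed by the create calls are captured into lvs/lvol the same way the script does; error checking omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # enable the TCP transport
  $rpc bdev_malloc_create 64 512                                   # two 64 MiB, 512 B-block RAM bdevs...
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # ...striped into a raid0 bdev
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvolstore on top of the raid
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"    # export the lvol as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 10-second randwrite load against that subsystem, and the snapshot/resize/clone/inflate RPCs issued while it runs, follow further down in the trace (spdk_nvme_perf and the bdev_lvol_* calls).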
00:09:57.358 [2024-07-25 13:58:06.522269] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.358 [2024-07-25 13:58:06.662458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.617 [2024-07-25 13:58:06.768661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.617 [2024-07-25 13:58:06.768713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.617 [2024-07-25 13:58:06.768721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.617 [2024-07-25 13:58:06.768727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.617 [2024-07-25 13:58:06.768732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.617 [2024-07-25 13:58:06.768909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.617 [2024-07-25 13:58:06.768954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.617 [2024-07-25 13:58:06.768953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.617 [2024-07-25 13:58:06.811692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.184 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.442 [2024-07-25 13:58:07.640118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.442 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.701 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:58.701 13:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.961 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:58.961 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:59.218 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:59.476 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=512ddfde-483b-40fd-953a-313c92bb8a55 00:09:59.476 13:58:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 512ddfde-483b-40fd-953a-313c92bb8a55 lvol 20 00:09:59.733 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=146c4d89-4345-4d2b-9c6d-3573a4db7078 00:09:59.733 13:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:59.991 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 146c4d89-4345-4d2b-9c6d-3573a4db7078 00:09:59.991 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:00.249 [2024-07-25 13:58:09.477479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.250 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.507 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64699 00:10:00.507 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:00.507 13:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:01.442 13:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 146c4d89-4345-4d2b-9c6d-3573a4db7078 MY_SNAPSHOT 00:10:01.701 13:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=07f994b0-1a8b-41dc-b547-0f742c822f4b 00:10:01.701 13:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 146c4d89-4345-4d2b-9c6d-3573a4db7078 30 00:10:02.268 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 07f994b0-1a8b-41dc-b547-0f742c822f4b MY_CLONE 00:10:02.269 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=36ecbfe1-1286-42fc-abef-d5850fc22a98 00:10:02.269 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 36ecbfe1-1286-42fc-abef-d5850fc22a98 00:10:02.835 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64699 00:10:10.947 Initializing NVMe Controllers 00:10:10.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:10.947 Controller IO queue size 128, less than required. 00:10:10.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:10.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:10.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:10.947 Initialization complete. Launching workers. 
00:10:10.947 ======================================================== 00:10:10.947 Latency(us) 00:10:10.947 Device Information : IOPS MiB/s Average min max 00:10:10.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10208.60 39.88 12544.66 2206.57 59321.83 00:10:10.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10250.10 40.04 12490.87 4656.36 62678.72 00:10:10.947 ======================================================== 00:10:10.947 Total : 20458.70 79.92 12517.71 2206.57 62678.72 00:10:10.947 00:10:10.947 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.947 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 146c4d89-4345-4d2b-9c6d-3573a4db7078 00:10:11.206 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 512ddfde-483b-40fd-953a-313c92bb8a55 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.465 rmmod nvme_tcp 00:10:11.465 rmmod nvme_fabrics 00:10:11.465 rmmod nvme_keyring 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64629 ']' 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64629 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64629 ']' 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64629 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.465 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64629 00:10:11.725 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.725 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.725 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 64629' 00:10:11.725 killing process with pid 64629 00:10:11.725 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64629 00:10:11.725 13:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64629 00:10:11.725 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.726 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:11.985 00:10:11.985 real 0m15.127s 00:10:11.985 user 1m3.705s 00:10:11.985 sys 0m3.512s 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.985 ************************************ 00:10:11.985 END TEST nvmf_lvol 00:10:11.985 ************************************ 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.985 ************************************ 00:10:11.985 START TEST nvmf_lvs_grow 00:10:11.985 ************************************ 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:11.985 * Looking for test storage... 
00:10:11.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.985 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
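Note that nvmf_lvs_grow.sh talks to two separate SPDK processes over two RPC sockets: rpc_py (scripts/rpc.py against the target's default /var/tmp/spdk.sock, the socket waitforlisten polls for below) configures nvmf_tgt, while bdevperf_rpc_sock=/var/tmp/bdevperf.sock, defined just above, is used later with rpc.py -s to configure the bdevperf initiator. For example, both of the following calls appear further down in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # target side
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0          # initiator side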
00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.245 Cannot find device "nvmf_tgt_br" 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.245 Cannot find device "nvmf_tgt_br2" 00:10:12.245 13:58:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.245 Cannot find device "nvmf_tgt_br" 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.245 Cannot find device "nvmf_tgt_br2" 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.245 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.505 13:58:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:10:12.505 00:10:12.505 --- 10.0.0.2 ping statistics --- 00:10:12.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.505 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:10:12.505 00:10:12.505 --- 10.0.0.3 ping statistics --- 00:10:12.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.505 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:12.505 00:10:12.505 --- 10.0.0.1 ping statistics --- 00:10:12.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.505 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65022 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65022 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 65022 ']' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.505 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:12.765 [2024-07-25 13:58:21.812595] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
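The lvs_grow_clean case that starts below exercises growing an lvolstore whose backing device gets larger. Condensed from the trace that follows (same paths and names as the harness; $lvs stands for the lvstore UUID captured from the create call, and error handling and cleanup are left out), the setup is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio_file"                                     # 200 MiB backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096                   # AIO bdev with 4 KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 x 4 MiB clusters to start with
  $rpc bdev_lvol_create -u "$lvs" lvol 150                         # 150 MiB lvol inside the store
  truncate -s 400M "$aio_file"                                     # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                                    # aio bdev picks up the new size (51200 -> 102400 blocks)

After the rescan the cluster count is checked again and is still 49, since resizing the backing bdev does not by itself grow the lvolstore; the lvol is then exported over NVMe/TCP and driven with bdevperf, as the rest of the trace shows.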
00:10:12.765 [2024-07-25 13:58:21.812665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.765 [2024-07-25 13:58:21.950985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.765 [2024-07-25 13:58:22.054861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.765 [2024-07-25 13:58:22.054930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.765 [2024-07-25 13:58:22.054938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.765 [2024-07-25 13:58:22.054944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.765 [2024-07-25 13:58:22.054949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.765 [2024-07-25 13:58:22.054971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.023 [2024-07-25 13:58:22.096807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.591 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.851 [2024-07-25 13:58:22.927931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 ************************************ 00:10:13.851 START TEST lvs_grow_clean 00:10:13.851 ************************************ 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:13.851 13:58:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:13.851 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:14.111 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:14.111 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:14.369 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ff2f3043-6317-460b-9f4e-efd4ca48296f lvol 150 00:10:14.627 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80588d24-49a2-4dd5-a448-24c5454d9837 00:10:14.627 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:14.627 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:14.886 [2024-07-25 13:58:24.050364] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:14.886 [2024-07-25 13:58:24.050456] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:14.886 true 00:10:14.886 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:14.886 13:58:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:15.145 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:15.145 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:15.405 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80588d24-49a2-4dd5-a448-24c5454d9837 00:10:15.405 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:15.663 [2024-07-25 13:58:24.901223] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.663 13:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.921 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65099 00:10:15.921 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65099 /var/tmp/bdevperf.sock 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65099 ']' 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.922 13:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:15.922 [2024-07-25 13:58:25.218476] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
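At this point the clean-grow setup is complete: a 200 MiB file-backed AIO bdev carries an lvstore with 4 MiB clusters (50 clusters in the file, one consumed by metadata, hence total_data_clusters 49), a 150 MiB lvol was created on it, and the backing file was truncated to 400 MiB and rescanned. A minimal sketch of the same flow, using only RPCs that appear in this run; the /tmp/aio_bdev path and running from the SPDK repo root are assumptions for illustration, not what the test uses:

    truncate -s 200M /tmp/aio_bdev
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096          # 4 KiB logical blocks
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49
    scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150                  # 150 MiB lvol
    truncate -s 400M /tmp/aio_bdev                                      # grow the backing file
    scripts/rpc.py bdev_aio_rescan aio_bdev                             # bdev grows from 51200 to 102400 blocks
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"                     # the test issues this later, while bdevperf runs
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

The 49 and 99 values follow from the sizes shown above: 200 MiB / 4 MiB = 50 and 400 MiB / 4 MiB = 100 clusters, with one cluster reserved for lvstore metadata in both cases.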
00:10:15.922 [2024-07-25 13:58:25.218549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65099 ] 00:10:16.180 [2024-07-25 13:58:25.354816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.180 [2024-07-25 13:58:25.452230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.437 [2024-07-25 13:58:25.493075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:17.003 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.003 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:17.003 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:17.261 Nvme0n1 00:10:17.261 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:17.519 [ 00:10:17.519 { 00:10:17.519 "name": "Nvme0n1", 00:10:17.519 "aliases": [ 00:10:17.519 "80588d24-49a2-4dd5-a448-24c5454d9837" 00:10:17.519 ], 00:10:17.519 "product_name": "NVMe disk", 00:10:17.519 "block_size": 4096, 00:10:17.519 "num_blocks": 38912, 00:10:17.519 "uuid": "80588d24-49a2-4dd5-a448-24c5454d9837", 00:10:17.519 "assigned_rate_limits": { 00:10:17.519 "rw_ios_per_sec": 0, 00:10:17.519 "rw_mbytes_per_sec": 0, 00:10:17.519 "r_mbytes_per_sec": 0, 00:10:17.519 "w_mbytes_per_sec": 0 00:10:17.519 }, 00:10:17.519 "claimed": false, 00:10:17.519 "zoned": false, 00:10:17.519 "supported_io_types": { 00:10:17.519 "read": true, 00:10:17.519 "write": true, 00:10:17.519 "unmap": true, 00:10:17.519 "flush": true, 00:10:17.519 "reset": true, 00:10:17.519 "nvme_admin": true, 00:10:17.519 "nvme_io": true, 00:10:17.519 "nvme_io_md": false, 00:10:17.519 "write_zeroes": true, 00:10:17.519 "zcopy": false, 00:10:17.519 "get_zone_info": false, 00:10:17.519 "zone_management": false, 00:10:17.519 "zone_append": false, 00:10:17.519 "compare": true, 00:10:17.519 "compare_and_write": true, 00:10:17.519 "abort": true, 00:10:17.519 "seek_hole": false, 00:10:17.519 "seek_data": false, 00:10:17.519 "copy": true, 00:10:17.519 "nvme_iov_md": false 00:10:17.519 }, 00:10:17.519 "memory_domains": [ 00:10:17.519 { 00:10:17.519 "dma_device_id": "system", 00:10:17.519 "dma_device_type": 1 00:10:17.519 } 00:10:17.519 ], 00:10:17.519 "driver_specific": { 00:10:17.519 "nvme": [ 00:10:17.519 { 00:10:17.519 "trid": { 00:10:17.519 "trtype": "TCP", 00:10:17.519 "adrfam": "IPv4", 00:10:17.519 "traddr": "10.0.0.2", 00:10:17.519 "trsvcid": "4420", 00:10:17.519 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:17.519 }, 00:10:17.519 "ctrlr_data": { 00:10:17.519 "cntlid": 1, 00:10:17.519 "vendor_id": "0x8086", 00:10:17.519 "model_number": "SPDK bdev Controller", 00:10:17.519 "serial_number": "SPDK0", 00:10:17.519 "firmware_revision": "24.09", 00:10:17.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:17.519 "oacs": { 00:10:17.519 "security": 0, 00:10:17.519 "format": 0, 00:10:17.519 "firmware": 0, 00:10:17.519 "ns_manage": 0 
00:10:17.519 }, 00:10:17.519 "multi_ctrlr": true, 00:10:17.519 "ana_reporting": false 00:10:17.519 }, 00:10:17.519 "vs": { 00:10:17.519 "nvme_version": "1.3" 00:10:17.519 }, 00:10:17.519 "ns_data": { 00:10:17.519 "id": 1, 00:10:17.519 "can_share": true 00:10:17.519 } 00:10:17.519 } 00:10:17.519 ], 00:10:17.519 "mp_policy": "active_passive" 00:10:17.519 } 00:10:17.519 } 00:10:17.519 ] 00:10:17.519 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65123 00:10:17.519 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:17.519 13:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:17.519 Running I/O for 10 seconds... 00:10:18.454 Latency(us) 00:10:18.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.454 Nvme0n1 : 1.00 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:10:18.454 =================================================================================================================== 00:10:18.454 Total : 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:10:18.454 00:10:19.462 13:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:19.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.462 Nvme0n1 : 2.00 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:10:19.462 =================================================================================================================== 00:10:19.462 Total : 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:10:19.462 00:10:19.720 true 00:10:19.720 13:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:19.720 13:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:19.977 13:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:19.978 13:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:19.978 13:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65123 00:10:20.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.543 Nvme0n1 : 3.00 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:10:20.543 =================================================================================================================== 00:10:20.543 Total : 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:10:20.543 00:10:21.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.480 Nvme0n1 : 4.00 9493.25 37.08 0.00 0.00 0.00 0.00 0.00 00:10:21.480 =================================================================================================================== 00:10:21.480 Total : 9493.25 37.08 0.00 0.00 0.00 0.00 0.00 00:10:21.480 00:10:22.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.416 Nvme0n1 : 5.00 
9423.40 36.81 0.00 0.00 0.00 0.00 0.00 00:10:22.416 =================================================================================================================== 00:10:22.416 Total : 9423.40 36.81 0.00 0.00 0.00 0.00 0.00 00:10:22.416 00:10:23.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.793 Nvme0n1 : 6.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:10:23.793 =================================================================================================================== 00:10:23.793 Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:10:23.793 00:10:24.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.730 Nvme0n1 : 7.00 9361.71 36.57 0.00 0.00 0.00 0.00 0.00 00:10:24.730 =================================================================================================================== 00:10:24.730 Total : 9361.71 36.57 0.00 0.00 0.00 0.00 0.00 00:10:24.730 00:10:25.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.668 Nvme0n1 : 8.00 9302.00 36.34 0.00 0.00 0.00 0.00 0.00 00:10:25.668 =================================================================================================================== 00:10:25.668 Total : 9302.00 36.34 0.00 0.00 0.00 0.00 0.00 00:10:25.668 00:10:26.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.606 Nvme0n1 : 9.00 9242.11 36.10 0.00 0.00 0.00 0.00 0.00 00:10:26.606 =================================================================================================================== 00:10:26.606 Total : 9242.11 36.10 0.00 0.00 0.00 0.00 0.00 00:10:26.606 00:10:27.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.550 Nvme0n1 : 10.00 9206.90 35.96 0.00 0.00 0.00 0.00 0.00 00:10:27.550 =================================================================================================================== 00:10:27.550 Total : 9206.90 35.96 0.00 0.00 0.00 0.00 0.00 00:10:27.550 00:10:27.550 00:10:27.550 Latency(us) 00:10:27.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.550 Nvme0n1 : 10.01 9212.56 35.99 0.00 0.00 13889.97 10474.31 38234.10 00:10:27.550 =================================================================================================================== 00:10:27.550 Total : 9212.56 35.99 0.00 0.00 13889.97 10474.31 38234.10 00:10:27.550 0 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65099 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65099 ']' 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 65099 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65099 00:10:27.550 killing process with pid 65099 00:10:27.550 Received shutdown signal, test time was about 10.000000 seconds 00:10:27.550 00:10:27.550 Latency(us) 00:10:27.550 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:10:27.550 =================================================================================================================== 00:10:27.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65099' 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65099 00:10:27.550 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65099 00:10:27.808 13:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.067 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:28.067 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:28.067 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:28.326 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:28.326 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:28.326 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:28.586 [2024-07-25 13:58:37.737957] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.586 13:58:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:28.586 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:28.845 request: 00:10:28.845 { 00:10:28.846 "uuid": "ff2f3043-6317-460b-9f4e-efd4ca48296f", 00:10:28.846 "method": "bdev_lvol_get_lvstores", 00:10:28.846 "req_id": 1 00:10:28.846 } 00:10:28.846 Got JSON-RPC error response 00:10:28.846 response: 00:10:28.846 { 00:10:28.846 "code": -19, 00:10:28.846 "message": "No such device" 00:10:28.846 } 00:10:28.846 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:28.846 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.846 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:28.846 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.846 13:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.846 aio_bdev 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80588d24-49a2-4dd5-a448-24c5454d9837 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=80588d24-49a2-4dd5-a448-24c5454d9837 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:29.105 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80588d24-49a2-4dd5-a448-24c5454d9837 -t 2000 00:10:29.364 [ 00:10:29.364 { 00:10:29.364 "name": "80588d24-49a2-4dd5-a448-24c5454d9837", 00:10:29.364 "aliases": [ 00:10:29.364 "lvs/lvol" 00:10:29.364 ], 00:10:29.364 "product_name": "Logical Volume", 00:10:29.364 "block_size": 4096, 00:10:29.364 "num_blocks": 38912, 00:10:29.364 "uuid": "80588d24-49a2-4dd5-a448-24c5454d9837", 00:10:29.364 
"assigned_rate_limits": { 00:10:29.364 "rw_ios_per_sec": 0, 00:10:29.364 "rw_mbytes_per_sec": 0, 00:10:29.364 "r_mbytes_per_sec": 0, 00:10:29.364 "w_mbytes_per_sec": 0 00:10:29.364 }, 00:10:29.364 "claimed": false, 00:10:29.364 "zoned": false, 00:10:29.364 "supported_io_types": { 00:10:29.364 "read": true, 00:10:29.364 "write": true, 00:10:29.364 "unmap": true, 00:10:29.364 "flush": false, 00:10:29.364 "reset": true, 00:10:29.364 "nvme_admin": false, 00:10:29.364 "nvme_io": false, 00:10:29.364 "nvme_io_md": false, 00:10:29.364 "write_zeroes": true, 00:10:29.364 "zcopy": false, 00:10:29.364 "get_zone_info": false, 00:10:29.364 "zone_management": false, 00:10:29.364 "zone_append": false, 00:10:29.364 "compare": false, 00:10:29.364 "compare_and_write": false, 00:10:29.364 "abort": false, 00:10:29.364 "seek_hole": true, 00:10:29.364 "seek_data": true, 00:10:29.364 "copy": false, 00:10:29.364 "nvme_iov_md": false 00:10:29.364 }, 00:10:29.364 "driver_specific": { 00:10:29.364 "lvol": { 00:10:29.364 "lvol_store_uuid": "ff2f3043-6317-460b-9f4e-efd4ca48296f", 00:10:29.364 "base_bdev": "aio_bdev", 00:10:29.364 "thin_provision": false, 00:10:29.364 "num_allocated_clusters": 38, 00:10:29.364 "snapshot": false, 00:10:29.364 "clone": false, 00:10:29.364 "esnap_clone": false 00:10:29.364 } 00:10:29.364 } 00:10:29.364 } 00:10:29.364 ] 00:10:29.364 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:29.364 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:29.364 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:29.622 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:29.622 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:29.622 13:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:29.880 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:29.880 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 80588d24-49a2-4dd5-a448-24c5454d9837 00:10:30.138 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff2f3043-6317-460b-9f4e-efd4ca48296f 00:10:30.138 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:30.395 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.653 00:10:30.653 real 0m16.999s 00:10:30.653 user 0m15.923s 00:10:30.653 sys 0m2.309s 00:10:30.653 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.913 ************************************ 00:10:30.913 END TEST 
lvs_grow_clean 00:10:30.913 ************************************ 00:10:30.913 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.913 ************************************ 00:10:30.913 START TEST lvs_grow_dirty 00:10:30.913 ************************************ 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.913 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:31.172 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:31.172 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:31.172 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:31.172 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:31.172 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:31.432 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:31.432 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:31.432 
13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec lvol 150 00:10:31.692 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=27828288-6012-4316-8338-0d83469ee2ae 00:10:31.692 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:31.692 13:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:31.952 [2024-07-25 13:58:41.042413] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:31.952 [2024-07-25 13:58:41.042483] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:31.952 true 00:10:31.952 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:31.952 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.210 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.210 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.210 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27828288-6012-4316-8338-0d83469ee2ae 00:10:32.469 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:32.759 [2024-07-25 13:58:41.857334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.759 13:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65358 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65358 /var/tmp/bdevperf.sock 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65358 ']' 00:10:33.019 13:58:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.019 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.019 [2024-07-25 13:58:42.134436] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:33.019 [2024-07-25 13:58:42.134598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65358 ] 00:10:33.019 [2024-07-25 13:58:42.273212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.278 [2024-07-25 13:58:42.379434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.278 [2024-07-25 13:58:42.422054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:33.845 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.845 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:33.845 13:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:34.103 Nvme0n1 00:10:34.103 13:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:34.362 [ 00:10:34.362 { 00:10:34.362 "name": "Nvme0n1", 00:10:34.362 "aliases": [ 00:10:34.362 "27828288-6012-4316-8338-0d83469ee2ae" 00:10:34.362 ], 00:10:34.362 "product_name": "NVMe disk", 00:10:34.362 "block_size": 4096, 00:10:34.362 "num_blocks": 38912, 00:10:34.362 "uuid": "27828288-6012-4316-8338-0d83469ee2ae", 00:10:34.362 "assigned_rate_limits": { 00:10:34.362 "rw_ios_per_sec": 0, 00:10:34.362 "rw_mbytes_per_sec": 0, 00:10:34.362 "r_mbytes_per_sec": 0, 00:10:34.362 "w_mbytes_per_sec": 0 00:10:34.362 }, 00:10:34.362 "claimed": false, 00:10:34.362 "zoned": false, 00:10:34.362 "supported_io_types": { 00:10:34.362 "read": true, 00:10:34.362 "write": true, 00:10:34.362 "unmap": true, 00:10:34.362 "flush": true, 00:10:34.362 "reset": true, 00:10:34.362 "nvme_admin": true, 00:10:34.362 "nvme_io": true, 00:10:34.362 "nvme_io_md": false, 00:10:34.362 "write_zeroes": true, 00:10:34.362 "zcopy": false, 00:10:34.362 "get_zone_info": false, 00:10:34.362 "zone_management": false, 00:10:34.362 "zone_append": false, 00:10:34.362 "compare": true, 00:10:34.362 "compare_and_write": true, 00:10:34.362 "abort": true, 00:10:34.362 "seek_hole": false, 00:10:34.362 
"seek_data": false, 00:10:34.362 "copy": true, 00:10:34.362 "nvme_iov_md": false 00:10:34.362 }, 00:10:34.362 "memory_domains": [ 00:10:34.362 { 00:10:34.362 "dma_device_id": "system", 00:10:34.362 "dma_device_type": 1 00:10:34.362 } 00:10:34.362 ], 00:10:34.362 "driver_specific": { 00:10:34.362 "nvme": [ 00:10:34.362 { 00:10:34.362 "trid": { 00:10:34.362 "trtype": "TCP", 00:10:34.362 "adrfam": "IPv4", 00:10:34.362 "traddr": "10.0.0.2", 00:10:34.362 "trsvcid": "4420", 00:10:34.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:34.362 }, 00:10:34.362 "ctrlr_data": { 00:10:34.362 "cntlid": 1, 00:10:34.363 "vendor_id": "0x8086", 00:10:34.363 "model_number": "SPDK bdev Controller", 00:10:34.363 "serial_number": "SPDK0", 00:10:34.363 "firmware_revision": "24.09", 00:10:34.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.363 "oacs": { 00:10:34.363 "security": 0, 00:10:34.363 "format": 0, 00:10:34.363 "firmware": 0, 00:10:34.363 "ns_manage": 0 00:10:34.363 }, 00:10:34.363 "multi_ctrlr": true, 00:10:34.363 "ana_reporting": false 00:10:34.363 }, 00:10:34.363 "vs": { 00:10:34.363 "nvme_version": "1.3" 00:10:34.363 }, 00:10:34.363 "ns_data": { 00:10:34.363 "id": 1, 00:10:34.363 "can_share": true 00:10:34.363 } 00:10:34.363 } 00:10:34.363 ], 00:10:34.363 "mp_policy": "active_passive" 00:10:34.363 } 00:10:34.363 } 00:10:34.363 ] 00:10:34.363 13:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:34.363 13:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65376 00:10:34.363 13:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:34.363 Running I/O for 10 seconds... 
00:10:35.742 Latency(us) 00:10:35.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.742 Nvme0n1 : 1.00 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:10:35.742 =================================================================================================================== 00:10:35.742 Total : 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:10:35.742 00:10:36.317 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:36.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.317 Nvme0n1 : 2.00 9906.00 38.70 0.00 0.00 0.00 0.00 0.00 00:10:36.317 =================================================================================================================== 00:10:36.317 Total : 9906.00 38.70 0.00 0.00 0.00 0.00 0.00 00:10:36.317 00:10:36.575 true 00:10:36.575 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:36.575 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:36.835 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:36.835 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:36.835 13:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65376 00:10:37.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.403 Nvme0n1 : 3.00 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:10:37.403 =================================================================================================================== 00:10:37.403 Total : 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:10:37.403 00:10:38.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.340 Nvme0n1 : 4.00 9747.25 38.08 0.00 0.00 0.00 0.00 0.00 00:10:38.340 =================================================================================================================== 00:10:38.340 Total : 9747.25 38.08 0.00 0.00 0.00 0.00 0.00 00:10:38.340 00:10:39.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.716 Nvme0n1 : 5.00 9651.00 37.70 0.00 0.00 0.00 0.00 0.00 00:10:39.716 =================================================================================================================== 00:10:39.716 Total : 9651.00 37.70 0.00 0.00 0.00 0.00 0.00 00:10:39.716 00:10:40.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.650 Nvme0n1 : 6.00 9608.83 37.53 0.00 0.00 0.00 0.00 0.00 00:10:40.650 =================================================================================================================== 00:10:40.650 Total : 9608.83 37.53 0.00 0.00 0.00 0.00 0.00 00:10:40.650 00:10:41.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.585 Nvme0n1 : 7.00 9577.86 37.41 0.00 0.00 0.00 0.00 0.00 00:10:41.585 =================================================================================================================== 00:10:41.585 
Total : 9577.86 37.41 0.00 0.00 0.00 0.00 0.00 00:10:41.585 00:10:42.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.519 Nvme0n1 : 8.00 9555.12 37.32 0.00 0.00 0.00 0.00 0.00 00:10:42.519 =================================================================================================================== 00:10:42.519 Total : 9555.12 37.32 0.00 0.00 0.00 0.00 0.00 00:10:42.519 00:10:43.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.454 Nvme0n1 : 9.00 9078.44 35.46 0.00 0.00 0.00 0.00 0.00 00:10:43.454 =================================================================================================================== 00:10:43.454 Total : 9078.44 35.46 0.00 0.00 0.00 0.00 0.00 00:10:43.454 00:10:44.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.387 Nvme0n1 : 10.00 8321.30 32.51 0.00 0.00 0.00 0.00 0.00 00:10:44.387 =================================================================================================================== 00:10:44.387 Total : 8321.30 32.51 0.00 0.00 0.00 0.00 0.00 00:10:44.387 00:10:44.387 00:10:44.387 Latency(us) 00:10:44.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.387 Nvme0n1 : 10.00 8331.02 32.54 0.00 0.00 15358.77 6553.60 1238143.89 00:10:44.387 =================================================================================================================== 00:10:44.387 Total : 8331.02 32.54 0.00 0.00 15358.77 6553.60 1238143.89 00:10:44.387 0 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65358 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65358 ']' 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65358 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65358 00:10:44.387 killing process with pid 65358 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65358' 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65358 00:10:44.387 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.387 00:10:44.387 Latency(us) 00:10:44.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.387 =================================================================================================================== 00:10:44.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.387 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65358 
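The teardown that follows checks free_clusters against 61, which is consistent with figures already printed in this log rather than a new number: the grown lvstore holds 99 data clusters and the 150 MiB lvol occupies 38 of them (num_allocated_clusters in the bdev dump above, i.e. 150 MiB rounded up to whole 4 MiB clusters), leaving 99 - 38 = 61 free. A one-line equivalent of that check, assuming $lvs holds the lvstore UUID:

    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 61 )) || echo "unexpected free cluster count: $free"   # 99 total - 38 allocated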
00:10:44.645 13:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.905 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65022 00:10:45.164 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65022 00:10:45.423 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65022 Killed "${NVMF_APP[@]}" "$@" 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65510 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65510 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65510 ']' 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
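What makes this the dirty variant happens next: instead of deleting the lvol, lvstore and AIO bdev, the original nvmf target (pid 65022 in this run) is killed with SIGKILL while the lvstore is still loaded, a fresh target (pid 65510) is started, and re-creating the AIO bdev over the same 400 MiB file forces blobstore recovery, which is what the "Performing recovery on blobstore" notices below report. A stripped-down sketch of that sequence, leaving out the network-namespace wrapper the test uses and treating $nvmfpid and $lvs as placeholders for the old target pid and the lvstore UUID:

    kill -9 "$nvmfpid"                     # no bdev_lvol_delete_lvstore / bdev_aio_delete first
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # (the test waits for the new target's RPC socket before issuing these)
    # re-attaching the same backing file replays the on-disk metadata and re-registers lvs/lvol
    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # still 61
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99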
00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.423 13:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:45.423 [2024-07-25 13:58:54.534141] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:45.423 [2024-07-25 13:58:54.534220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.423 [2024-07-25 13:58:54.673155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.683 [2024-07-25 13:58:54.770975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.683 [2024-07-25 13:58:54.771038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.683 [2024-07-25 13:58:54.771048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.683 [2024-07-25 13:58:54.771056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.683 [2024-07-25 13:58:54.771078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.683 [2024-07-25 13:58:54.771120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.683 [2024-07-25 13:58:54.812723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.251 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.509 [2024-07-25 13:58:55.680928] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:46.509 [2024-07-25 13:58:55.681361] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:46.509 [2024-07-25 13:58:55.681544] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 27828288-6012-4316-8338-0d83469ee2ae 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=27828288-6012-4316-8338-0d83469ee2ae 00:10:46.509 13:58:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.509 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:46.767 13:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27828288-6012-4316-8338-0d83469ee2ae -t 2000 00:10:47.025 [ 00:10:47.025 { 00:10:47.025 "name": "27828288-6012-4316-8338-0d83469ee2ae", 00:10:47.025 "aliases": [ 00:10:47.025 "lvs/lvol" 00:10:47.025 ], 00:10:47.025 "product_name": "Logical Volume", 00:10:47.025 "block_size": 4096, 00:10:47.025 "num_blocks": 38912, 00:10:47.026 "uuid": "27828288-6012-4316-8338-0d83469ee2ae", 00:10:47.026 "assigned_rate_limits": { 00:10:47.026 "rw_ios_per_sec": 0, 00:10:47.026 "rw_mbytes_per_sec": 0, 00:10:47.026 "r_mbytes_per_sec": 0, 00:10:47.026 "w_mbytes_per_sec": 0 00:10:47.026 }, 00:10:47.026 "claimed": false, 00:10:47.026 "zoned": false, 00:10:47.026 "supported_io_types": { 00:10:47.026 "read": true, 00:10:47.026 "write": true, 00:10:47.026 "unmap": true, 00:10:47.026 "flush": false, 00:10:47.026 "reset": true, 00:10:47.026 "nvme_admin": false, 00:10:47.026 "nvme_io": false, 00:10:47.026 "nvme_io_md": false, 00:10:47.026 "write_zeroes": true, 00:10:47.026 "zcopy": false, 00:10:47.026 "get_zone_info": false, 00:10:47.026 "zone_management": false, 00:10:47.026 "zone_append": false, 00:10:47.026 "compare": false, 00:10:47.026 "compare_and_write": false, 00:10:47.026 "abort": false, 00:10:47.026 "seek_hole": true, 00:10:47.026 "seek_data": true, 00:10:47.026 "copy": false, 00:10:47.026 "nvme_iov_md": false 00:10:47.026 }, 00:10:47.026 "driver_specific": { 00:10:47.026 "lvol": { 00:10:47.026 "lvol_store_uuid": "c85fe9c8-ebdb-4b0b-b541-c654d3e717ec", 00:10:47.026 "base_bdev": "aio_bdev", 00:10:47.026 "thin_provision": false, 00:10:47.026 "num_allocated_clusters": 38, 00:10:47.026 "snapshot": false, 00:10:47.026 "clone": false, 00:10:47.026 "esnap_clone": false 00:10:47.026 } 00:10:47.026 } 00:10:47.026 } 00:10:47.026 ] 00:10:47.026 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:47.026 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:47.026 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:47.284 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:47.284 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:47.284 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:10:47.284 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:47.284 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.542 [2024-07-25 13:58:56.736615] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.542 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.543 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.543 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.543 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.543 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:47.802 request: 00:10:47.802 { 00:10:47.802 "uuid": "c85fe9c8-ebdb-4b0b-b541-c654d3e717ec", 00:10:47.802 "method": "bdev_lvol_get_lvstores", 00:10:47.802 "req_id": 1 00:10:47.802 } 00:10:47.802 Got JSON-RPC error response 00:10:47.802 response: 00:10:47.802 { 00:10:47.802 "code": -19, 00:10:47.802 "message": "No such device" 00:10:47.802 } 00:10:47.802 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:47.802 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:47.802 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:47.802 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:47.802 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.060 aio_bdev 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27828288-6012-4316-8338-0d83469ee2ae 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=27828288-6012-4316-8338-0d83469ee2ae 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.060 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:48.319 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27828288-6012-4316-8338-0d83469ee2ae -t 2000 00:10:48.578 [ 00:10:48.578 { 00:10:48.578 "name": "27828288-6012-4316-8338-0d83469ee2ae", 00:10:48.578 "aliases": [ 00:10:48.578 "lvs/lvol" 00:10:48.578 ], 00:10:48.578 "product_name": "Logical Volume", 00:10:48.579 "block_size": 4096, 00:10:48.579 "num_blocks": 38912, 00:10:48.579 "uuid": "27828288-6012-4316-8338-0d83469ee2ae", 00:10:48.579 "assigned_rate_limits": { 00:10:48.579 "rw_ios_per_sec": 0, 00:10:48.579 "rw_mbytes_per_sec": 0, 00:10:48.579 "r_mbytes_per_sec": 0, 00:10:48.579 "w_mbytes_per_sec": 0 00:10:48.579 }, 00:10:48.579 "claimed": false, 00:10:48.579 "zoned": false, 00:10:48.579 "supported_io_types": { 00:10:48.579 "read": true, 00:10:48.579 "write": true, 00:10:48.579 "unmap": true, 00:10:48.579 "flush": false, 00:10:48.579 "reset": true, 00:10:48.579 "nvme_admin": false, 00:10:48.579 "nvme_io": false, 00:10:48.579 "nvme_io_md": false, 00:10:48.579 "write_zeroes": true, 00:10:48.579 "zcopy": false, 00:10:48.579 "get_zone_info": false, 00:10:48.579 "zone_management": false, 00:10:48.579 "zone_append": false, 00:10:48.579 "compare": false, 00:10:48.579 "compare_and_write": false, 00:10:48.579 "abort": false, 00:10:48.579 "seek_hole": true, 00:10:48.579 "seek_data": true, 00:10:48.579 "copy": false, 00:10:48.579 "nvme_iov_md": false 00:10:48.579 }, 00:10:48.579 "driver_specific": { 00:10:48.579 "lvol": { 00:10:48.579 "lvol_store_uuid": "c85fe9c8-ebdb-4b0b-b541-c654d3e717ec", 00:10:48.579 "base_bdev": "aio_bdev", 00:10:48.579 "thin_provision": false, 00:10:48.579 "num_allocated_clusters": 38, 00:10:48.579 "snapshot": false, 00:10:48.579 "clone": false, 00:10:48.579 "esnap_clone": false 00:10:48.579 } 00:10:48.579 } 00:10:48.579 } 00:10:48.579 ] 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:48.579 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:48.838 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:48.838 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 27828288-6012-4316-8338-0d83469ee2ae 00:10:49.097 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c85fe9c8-ebdb-4b0b-b541-c654d3e717ec 00:10:49.355 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:49.355 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:49.925 00:10:49.925 real 0m18.918s 00:10:49.925 user 0m40.620s 00:10:49.925 sys 0m6.648s 00:10:49.925 ************************************ 00:10:49.925 END TEST lvs_grow_dirty 00:10:49.925 ************************************ 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:49.925 13:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:49.925 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:49.925 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:49.925 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:49.926 nvmf_trace.0 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:49.926 13:58:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.926 rmmod nvme_tcp 00:10:49.926 rmmod nvme_fabrics 00:10:49.926 rmmod nvme_keyring 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65510 ']' 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65510 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65510 ']' 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65510 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.926 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65510 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.186 killing process with pid 65510 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65510' 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65510 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65510 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.186 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:50.446 00:10:50.446 real 0m38.354s 00:10:50.446 user 1m2.036s 00:10:50.446 sys 0m9.800s 00:10:50.446 ************************************ 00:10:50.446 END TEST nvmf_lvs_grow 00:10:50.446 ************************************ 
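Note: the lvs_grow_dirty teardown traced above reduces to a short rpc.py sequence: hot-remove the backing aio bdev, confirm the lvstore query then fails with -19, re-create the aio bdev from the same file, and check that the dirty lvstore and its lvol come back with the same cluster accounting. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and reusing the UUIDs from this particular run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS=c85fe9c8-ebdb-4b0b-b541-c654d3e717ec        # lvstore UUID from this run
  LVOL=27828288-6012-4316-8338-0d83469ee2ae       # lvol bdev UUID from this run

  # Hot-remove the backing aio bdev; the lvstore closes and queries must fail (code -19).
  $RPC bdev_aio_delete aio_bdev
  $RPC bdev_lvol_get_lvstores -u "$LVS" && echo "unexpected: lvstore still present"

  # Re-create the aio bdev from the same file and let examine re-load the dirty lvstore.
  $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b "$LVOL" -t 2000 > /dev/null

  # Cluster accounting should survive the reload (61 free of 99 data clusters here).
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters, .[0].total_data_clusters'
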
00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.446 ************************************ 00:10:50.446 START TEST nvmf_bdev_io_wait 00:10:50.446 ************************************ 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:50.446 * Looking for test storage... 00:10:50.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.446 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.447 
13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.447 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:50.708 Cannot find device "nvmf_tgt_br" 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.708 Cannot find device "nvmf_tgt_br2" 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:50.708 Cannot find device "nvmf_tgt_br" 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:50.708 Cannot find device "nvmf_tgt_br2" 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.708 13:58:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.708 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:50.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:10:50.708 00:10:50.708 --- 10.0.0.2 ping statistics --- 00:10:50.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.708 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:10:50.708 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:50.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:50.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:10:50.708 00:10:50.708 --- 10.0.0.3 ping statistics --- 00:10:50.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.708 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:50.708 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:10:50.968 00:10:50.968 --- 10.0.0.1 ping statistics --- 00:10:50.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.968 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65816 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65816 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65816 ']' 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
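Note: the nvmf_veth_init calls above amount to a small virtual topology: a veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to one bridge. A condensed sketch using the same commands (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the default NVMe/TCP port on the initiator interface and verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
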
00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.968 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.968 [2024-07-25 13:59:00.106885] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:50.968 [2024-07-25 13:59:00.106935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.968 [2024-07-25 13:59:00.244511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.227 [2024-07-25 13:59:00.327269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.227 [2024-07-25 13:59:00.327338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.227 [2024-07-25 13:59:00.327344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.227 [2024-07-25 13:59:00.327349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.227 [2024-07-25 13:59:00.327354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.227 [2024-07-25 13:59:00.327565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.227 [2024-07-25 13:59:00.327840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.227 [2024-07-25 13:59:00.327769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.227 [2024-07-25 13:59:00.327926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:51.796 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.797 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.797 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.797 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:51.797 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.797 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.797 [2024-07-25 13:59:01.032262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.797 [2024-07-25 13:59:01.047497] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.797 Malloc0 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.797 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 [2024-07-25 13:59:01.121259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65855 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65857 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:52.057 13:59:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.057 { 00:10:52.057 "params": { 00:10:52.057 "name": "Nvme$subsystem", 00:10:52.057 "trtype": "$TEST_TRANSPORT", 00:10:52.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.057 "adrfam": "ipv4", 00:10:52.057 "trsvcid": "$NVMF_PORT", 00:10:52.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.057 "hdgst": ${hdgst:-false}, 00:10:52.057 "ddgst": ${ddgst:-false} 00:10:52.057 }, 00:10:52.057 "method": "bdev_nvme_attach_controller" 00:10:52.057 } 00:10:52.057 EOF 00:10:52.057 )") 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65859 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.057 { 00:10:52.057 "params": { 00:10:52.057 "name": "Nvme$subsystem", 00:10:52.057 "trtype": "$TEST_TRANSPORT", 00:10:52.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.057 "adrfam": "ipv4", 00:10:52.057 "trsvcid": "$NVMF_PORT", 00:10:52.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.057 "hdgst": ${hdgst:-false}, 00:10:52.057 "ddgst": ${ddgst:-false} 00:10:52.057 }, 00:10:52.057 "method": "bdev_nvme_attach_controller" 00:10:52.057 } 00:10:52.057 EOF 00:10:52.057 )") 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65862 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.057 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.057 { 00:10:52.057 "params": { 00:10:52.057 "name": "Nvme$subsystem", 00:10:52.057 "trtype": "$TEST_TRANSPORT", 00:10:52.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.057 "adrfam": "ipv4", 00:10:52.057 "trsvcid": "$NVMF_PORT", 00:10:52.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.058 "hdgst": ${hdgst:-false}, 00:10:52.058 "ddgst": ${ddgst:-false} 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 } 00:10:52.058 EOF 00:10:52.058 )") 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.058 "params": { 00:10:52.058 "name": "Nvme1", 00:10:52.058 "trtype": "tcp", 00:10:52.058 "traddr": "10.0.0.2", 00:10:52.058 "adrfam": "ipv4", 00:10:52.058 "trsvcid": "4420", 00:10:52.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.058 "hdgst": false, 00:10:52.058 "ddgst": false 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 }' 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.058 { 00:10:52.058 "params": { 00:10:52.058 "name": "Nvme$subsystem", 00:10:52.058 "trtype": "$TEST_TRANSPORT", 00:10:52.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.058 "adrfam": "ipv4", 00:10:52.058 "trsvcid": "$NVMF_PORT", 00:10:52.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.058 "hdgst": ${hdgst:-false}, 00:10:52.058 "ddgst": ${ddgst:-false} 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 } 00:10:52.058 EOF 00:10:52.058 )") 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.058 "params": { 00:10:52.058 "name": "Nvme1", 00:10:52.058 "trtype": "tcp", 00:10:52.058 "traddr": "10.0.0.2", 00:10:52.058 "adrfam": "ipv4", 00:10:52.058 "trsvcid": "4420", 00:10:52.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.058 "hdgst": false, 00:10:52.058 "ddgst": false 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 }' 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.058 "params": { 00:10:52.058 "name": "Nvme1", 00:10:52.058 "trtype": "tcp", 00:10:52.058 "traddr": "10.0.0.2", 00:10:52.058 "adrfam": "ipv4", 00:10:52.058 "trsvcid": "4420", 00:10:52.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.058 "hdgst": false, 00:10:52.058 "ddgst": false 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 }' 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.058 "params": { 00:10:52.058 "name": "Nvme1", 00:10:52.058 "trtype": "tcp", 00:10:52.058 "traddr": "10.0.0.2", 00:10:52.058 "adrfam": "ipv4", 00:10:52.058 "trsvcid": "4420", 00:10:52.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.058 "hdgst": false, 00:10:52.058 "ddgst": false 00:10:52.058 }, 00:10:52.058 "method": "bdev_nvme_attach_controller" 00:10:52.058 }' 00:10:52.058 13:59:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65855 00:10:52.058 [2024-07-25 13:59:01.196582] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:52.058 [2024-07-25 13:59:01.196732] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:52.058 [2024-07-25 13:59:01.200773] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:52.058 [2024-07-25 13:59:01.200905] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:52.058 [2024-07-25 13:59:01.202270] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:10:52.058 [2024-07-25 13:59:01.202383] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:52.058 [2024-07-25 13:59:01.205851] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:10:52.058 [2024-07-25 13:59:01.207138] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:52.386 [2024-07-25 13:59:01.392847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.386 [2024-07-25 13:59:01.460278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.386 [2024-07-25 13:59:01.475630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:52.386 [2024-07-25 13:59:01.514034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.386 [2024-07-25 13:59:01.538602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.386 [2024-07-25 13:59:01.545453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:52.386 [2024-07-25 13:59:01.583907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.386 [2024-07-25 13:59:01.598593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.386 Running I/O for 1 seconds... 00:10:52.386 [2024-07-25 13:59:01.623791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:52.386 [2024-07-25 13:59:01.662464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.646 Running I/O for 1 seconds... 00:10:52.646 [2024-07-25 13:59:01.684188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:52.646 [2024-07-25 13:59:01.722979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.646 Running I/O for 1 seconds... 00:10:52.646 Running I/O for 1 seconds... 
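Note: each of the four bdevperf jobs above receives its controller-attach configuration on /dev/fd/63. Spelled out for the write instance (core mask 0x10), and assuming the standard SPDK "subsystems"/"bdev" JSON-config wrapper that gen_nvmf_target_json places around the params block printed above, the invocation is roughly:

  # Per-controller attach block as printed above, wrapped into a bdev-subsystem config.
  cfg='{
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }'

  # Equivalent of the test's "--json /dev/fd/63" redirect, fed via process substitution.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(printf '%s\n' "$cfg")
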
00:10:53.584 00:10:53.584 Latency(us) 00:10:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.584 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:53.584 Nvme1n1 : 1.02 7587.57 29.64 0.00 0.00 16670.26 7240.44 33655.17 00:10:53.584 =================================================================================================================== 00:10:53.584 Total : 7587.57 29.64 0.00 0.00 16670.26 7240.44 33655.17 00:10:53.584 00:10:53.584 Latency(us) 00:10:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.584 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:53.584 Nvme1n1 : 1.01 10580.85 41.33 0.00 0.00 12046.80 7555.24 25642.03 00:10:53.584 =================================================================================================================== 00:10:53.584 Total : 10580.85 41.33 0.00 0.00 12046.80 7555.24 25642.03 00:10:53.584 00:10:53.584 Latency(us) 00:10:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.584 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:53.584 Nvme1n1 : 1.00 210487.17 822.22 0.00 0.00 605.93 305.86 958.71 00:10:53.584 =================================================================================================================== 00:10:53.584 Total : 210487.17 822.22 0.00 0.00 605.93 305.86 958.71 00:10:53.584 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65857 00:10:53.584 00:10:53.584 Latency(us) 00:10:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.584 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:53.584 Nvme1n1 : 1.01 7744.86 30.25 0.00 0.00 16473.14 6210.18 43270.93 00:10:53.584 =================================================================================================================== 00:10:53.584 Total : 7744.86 30.25 0.00 0.00 16473.14 6210.18 43270.93 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65859 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65862 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.842 rmmod nvme_tcp 00:10:53.842 rmmod nvme_fabrics 00:10:53.842 rmmod nvme_keyring 00:10:53.842 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65816 ']' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65816 ']' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.102 killing process with pid 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65816' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65816 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.102 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:54.361 00:10:54.361 real 0m3.867s 00:10:54.361 user 0m17.117s 00:10:54.361 sys 0m1.929s 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.361 ************************************ 00:10:54.361 END TEST nvmf_bdev_io_wait 00:10:54.361 ************************************ 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 ************************************ 00:10:54.361 START TEST nvmf_queue_depth 00:10:54.361 ************************************ 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:54.361 * Looking for test storage... 00:10:54.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.361 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.362 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:54.621 Cannot find device "nvmf_tgt_br" 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.621 Cannot find device "nvmf_tgt_br2" 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:54.621 Cannot find device "nvmf_tgt_br" 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:54.621 Cannot find device "nvmf_tgt_br2" 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:54.621 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:54.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:10:54.880 00:10:54.880 --- 10.0.0.2 ping statistics --- 00:10:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.880 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:54.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:54.880 00:10:54.880 --- 10.0.0.3 ping statistics --- 00:10:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.880 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:54.880 00:10:54.880 --- 10.0.0.1 ping statistics --- 00:10:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.880 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.880 13:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66094 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66094 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66094 ']' 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.880 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:54.880 [2024-07-25 13:59:04.070111] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
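At this point nvmf_veth_init (nvmf/common.sh) has finished building the test network: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends at 10.0.0.2 and 10.0.0.3, the host-side initiator interface at 10.0.0.1, and a bridge nvmf_br joining the peer ends, verified by the three pings above. A condensed sketch of roughly what that amounts to, reusing the names and addresses from the commands above (ordering and error handling simplified, so this is not a drop-in script):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path (used later by multipath)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br && ip link set "$dev" up
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in from the host side
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                  # same sanity checks as above

With that plumbing in place, the nvmf_tgt process being started here runs inside the namespace and listens on 10.0.0.2, while the initiator tooling on the host reaches it from 10.0.0.1 through the bridge.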
00:10:54.880 [2024-07-25 13:59:04.070178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.139 [2024-07-25 13:59:04.211086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.139 [2024-07-25 13:59:04.308445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.139 [2024-07-25 13:59:04.308490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.139 [2024-07-25 13:59:04.308497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.139 [2024-07-25 13:59:04.308502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.139 [2024-07-25 13:59:04.308506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.139 [2024-07-25 13:59:04.308526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.139 [2024-07-25 13:59:04.350310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.706 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.706 [2024-07-25 13:59:04.996158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.706 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.706 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.706 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.706 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.965 Malloc0 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.965 [2024-07-25 13:59:05.065111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66126 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66126 /var/tmp/bdevperf.sock 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66126 ']' 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.965 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.965 [2024-07-25 13:59:05.119075] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
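The target-side configuration for the queue-depth test therefore reduces to a handful of RPCs against the nvmf_tgt just started, followed by launching bdevperf in idle (-z) mode so it can be driven over its own RPC socket. A condensed sketch with the same names and arguments as the run above (rpc.py stands in for the rpc_cmd helper, which ultimately calls SPDK's scripts/rpc.py; the bdevperf binary path is shortened):

# Target side: transport, backing bdev, subsystem, namespace, listener
rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the options the test passes
rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits (-z) on its RPC socket until told to run
# -q 1024 = queue depth under test, -o 4096 = I/O size in bytes, -w verify = verified read/write, -t 10 = seconds
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The -z flag makes bdevperf register its bdevs and then block until a perform_tests RPC arrives, which is what lets the script attach the NVMe-oF controller first and start the timed run afterwards.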
00:10:55.965 [2024-07-25 13:59:05.119233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66126 ] 00:10:55.965 [2024-07-25 13:59:05.257399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.225 [2024-07-25 13:59:05.359248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.225 [2024-07-25 13:59:05.400627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.856 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.856 13:59:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:56.856 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:56.856 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.856 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.856 NVMe0n1 00:10:56.856 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.856 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:57.115 Running I/O for 10 seconds... 00:11:07.096 00:11:07.096 Latency(us) 00:11:07.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.096 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:07.096 Verification LBA range: start 0x0 length 0x4000 00:11:07.096 NVMe0n1 : 10.08 9762.48 38.13 0.00 0.00 104482.30 20490.73 76010.31 00:11:07.096 =================================================================================================================== 00:11:07.096 Total : 9762.48 38.13 0.00 0.00 104482.30 20490.73 76010.31 00:11:07.096 0 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66126 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66126 ']' 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66126 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66126 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.096 killing process with pid 66126 00:11:07.096 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.096 00:11:07.096 Latency(us) 00:11:07.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.096 =================================================================================================================== 00:11:07.096 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66126' 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66126 00:11:07.096 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66126 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.396 rmmod nvme_tcp 00:11:07.396 rmmod nvme_fabrics 00:11:07.396 rmmod nvme_keyring 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66094 ']' 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66094 00:11:07.396 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66094 ']' 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66094 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66094 00:11:07.397 killing process with pid 66094 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66094' 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66094 00:11:07.397 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66094 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.655 13:59:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:07.655 00:11:07.655 real 0m13.422s 00:11:07.655 user 0m23.365s 00:11:07.655 sys 0m2.025s 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.655 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.655 ************************************ 00:11:07.655 END TEST nvmf_queue_depth 00:11:07.655 ************************************ 00:11:07.916 13:59:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:07.916 13:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.916 13:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.916 13:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.916 ************************************ 00:11:07.916 START TEST nvmf_target_multipath 00:11:07.916 ************************************ 00:11:07.916 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:07.916 * Looking for test storage... 
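Before the multipath output starts in earnest, the initiator half of the queue-depth run that just completed is worth spelling out: the idling bdevperf process first attaches the remote subsystem as a local NVMe bdev, then the timed workload is kicked off through bdevperf's helper script. A condensed sketch with the same arguments as above (paths shortened):

# Attach the target's subsystem over TCP; it shows up inside bdevperf as NVMe0n1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Trigger the configured -q 1024 -o 4096 -w verify -t 10 run and wait for the summary
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The summary above reported roughly 9762 IOPS (about 38 MiB/s of 4 KiB verified I/O) through the veth/TCP path at queue depth 1024, after which bdevperf and the target were torn down and the per-test timing printed.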
00:11:07.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:07.916 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.917 13:59:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:07.917 Cannot find device "nvmf_tgt_br" 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.917 Cannot find device "nvmf_tgt_br2" 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:07.917 Cannot find device "nvmf_tgt_br" 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:07.917 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:08.176 Cannot find device "nvmf_tgt_br2" 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:08.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:11:08.176 00:11:08.176 --- 10.0.0.2 ping statistics --- 00:11:08.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.176 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:08.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:08.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:11:08.176 00:11:08.176 --- 10.0.0.3 ping statistics --- 00:11:08.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.176 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:11:08.176 00:11:08.176 --- 10.0.0.1 ping statistics --- 00:11:08.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.176 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.176 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66443 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66443 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66443 ']' 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
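One difference from the queue-depth target is the core mask: that target ran with -m 0x2 (a single reactor on core 1), while the multipath target started here uses -m 0xF, so the startup messages that follow report four cores with reactors on cores 0 through 3. The mask is simply a bitmap of CPU ids; a small illustration with a hypothetical helper (not part of the test scripts):

# expand_coremask: print the CPU ids selected by a hex core mask (illustration only)
expand_coremask() {
    local mask=$(( $1 )) cpu
    for cpu in $(seq 0 63); do
        (( (mask >> cpu) & 1 )) && printf '%d ' "$cpu"
    done
    echo
}
expand_coremask 0x2   # -> 1         (queue-depth target: one reactor on core 1)
expand_coremask 0xF   # -> 0 1 2 3   (multipath target: four reactors)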
00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.434 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.434 [2024-07-25 13:59:17.562607] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:11:08.434 [2024-07-25 13:59:17.562671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.434 [2024-07-25 13:59:17.701393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.691 [2024-07-25 13:59:17.807225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.691 [2024-07-25 13:59:17.807384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.691 [2024-07-25 13:59:17.807432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.691 [2024-07-25 13:59:17.807495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.692 [2024-07-25 13:59:17.807536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.692 [2024-07-25 13:59:17.807663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.692 [2024-07-25 13:59:17.807737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.692 [2024-07-25 13:59:17.807769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.692 [2024-07-25 13:59:17.807773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.692 [2024-07-25 13:59:17.872676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.257 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:09.516 [2024-07-25 13:59:18.702807] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.516 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:09.774 Malloc0 00:11:09.774 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:10.032 13:59:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.290 13:59:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.290 [2024-07-25 13:59:19.589181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.549 13:59:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:10.549 [2024-07-25 13:59:19.789005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:10.549 13:59:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:10.809 13:59:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:10.809 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.809 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.809 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.809 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:10.809 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:13.344 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:13.345 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66535 00:11:13.345 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:13.345 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:13.345 [global] 00:11:13.345 thread=1 00:11:13.345 invalidate=1 00:11:13.345 rw=randrw 00:11:13.345 time_based=1 00:11:13.345 runtime=6 00:11:13.345 ioengine=libaio 00:11:13.345 direct=1 00:11:13.345 bs=4096 00:11:13.345 iodepth=128 00:11:13.345 norandommap=0 00:11:13.345 numjobs=1 00:11:13.345 00:11:13.345 verify_dump=1 00:11:13.345 verify_backlog=512 00:11:13.345 verify_state_save=0 00:11:13.345 do_verify=1 00:11:13.345 verify=crc32c-intel 00:11:13.345 [job0] 00:11:13.345 filename=/dev/nvme0n1 00:11:13.345 Could not set queue depth (nvme0n1) 00:11:13.345 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.345 fio-3.35 00:11:13.345 Starting 1 thread 00:11:13.912 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:14.177 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:14.436 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:14.694 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:14.695 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66535 00:11:19.968 00:11:19.968 job0: (groupid=0, jobs=1): err= 0: pid=66556: Thu Jul 25 13:59:28 2024 00:11:19.968 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(282MiB/6002msec) 00:11:19.968 slat (usec): min=3, max=5468, avg=46.54, stdev=176.52 00:11:19.968 clat (usec): min=531, max=16392, avg=7266.42, stdev=1381.65 00:11:19.968 lat (usec): min=540, max=16403, avg=7312.96, stdev=1387.24 00:11:19.968 clat percentiles (usec): 00:11:19.968 | 1.00th=[ 4080], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6521], 00:11:19.968 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:11:19.968 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[10421], 00:11:19.968 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13304], 99.95th=[15139], 00:11:19.968 | 99.99th=[15926] 00:11:19.968 bw ( KiB/s): min=15536, max=31520, per=52.31%, avg=25131.09, stdev=5517.06, samples=11 00:11:19.968 iops : min= 3884, max= 7880, avg=6282.73, stdev=1379.31, samples=11 00:11:19.968 write: IOPS=6997, BW=27.3MiB/s (28.7MB/s)(147MiB/5374msec); 0 zone resets 00:11:19.968 slat (usec): min=12, max=1421, avg=59.89, stdev=111.27 00:11:19.968 clat (usec): min=392, max=16106, avg=6308.50, stdev=1199.52 00:11:19.968 lat (usec): min=431, max=16134, avg=6368.39, stdev=1202.63 00:11:19.968 clat percentiles (usec): 00:11:19.968 | 1.00th=[ 3228], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5473], 00:11:19.968 | 30.00th=[ 5932], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6652], 00:11:19.968 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7767], 00:11:19.968 | 99.00th=[10028], 99.50th=[10683], 99.90th=[14746], 99.95th=[15139], 00:11:19.968 | 99.99th=[15533] 00:11:19.968 bw ( KiB/s): min=16408, max=31176, per=89.73%, avg=25116.82, stdev=5155.13, samples=11 00:11:19.968 iops : min= 4102, max= 7794, avg=6279.18, stdev=1288.80, samples=11 00:11:19.968 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:19.968 lat (msec) : 2=0.10%, 4=1.65%, 10=93.77%, 20=4.46% 00:11:19.968 cpu : usr=5.85%, sys=29.68%, ctx=6831, majf=0, minf=108 00:11:19.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:19.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.968 issued rwts: total=72081,37607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.968 00:11:19.968 Run status group 0 (all jobs): 00:11:19.968 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=282MiB (295MB), run=6002-6002msec 00:11:19.968 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=147MiB (154MB), run=5374-5374msec 00:11:19.968 00:11:19.968 Disk stats (read/write): 00:11:19.968 nvme0n1: ios=70962/37164, merge=0/0, ticks=478871/209719, in_queue=688590, util=98.63% 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66634 00:11:19.968 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:19.968 [global] 00:11:19.968 thread=1 00:11:19.968 invalidate=1 00:11:19.968 rw=randrw 00:11:19.968 time_based=1 00:11:19.968 runtime=6 00:11:19.968 ioengine=libaio 00:11:19.968 direct=1 00:11:19.968 bs=4096 00:11:19.968 iodepth=128 00:11:19.968 norandommap=0 00:11:19.968 numjobs=1 00:11:19.968 00:11:19.968 verify_dump=1 00:11:19.968 verify_backlog=512 00:11:19.968 verify_state_save=0 00:11:19.968 do_verify=1 00:11:19.968 verify=crc32c-intel 00:11:19.968 [job0] 00:11:19.968 filename=/dev/nvme0n1 00:11:19.968 Could not set queue depth (nvme0n1) 00:11:19.968 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.968 fio-3.35 00:11:19.968 Starting 1 thread 00:11:20.907 13:59:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:20.907 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n non_optimized 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:21.169 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:21.434 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:21.701 13:59:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66634 00:11:25.938 00:11:25.938 job0: (groupid=0, jobs=1): err= 0: pid=66661: Thu Jul 25 13:59:35 2024 00:11:25.938 read: IOPS=12.4k, BW=48.3MiB/s (50.7MB/s)(290MiB/6003msec) 00:11:25.938 slat (usec): min=3, max=9161, avg=39.89, stdev=163.42 00:11:25.938 clat (usec): min=306, max=17751, avg=7154.75, stdev=1811.11 00:11:25.938 lat (usec): min=316, max=17770, avg=7194.64, stdev=1823.64 00:11:25.938 clat percentiles (usec): 00:11:25.938 | 1.00th=[ 2900], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5866], 00:11:25.938 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7570], 00:11:25.938 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[10421], 00:11:25.938 | 99.00th=[12125], 99.50th=[12649], 99.90th=[16188], 99.95th=[17695], 00:11:25.938 | 99.99th=[17695] 00:11:25.938 bw ( KiB/s): min=12240, max=40496, per=53.49%, avg=26456.55, stdev=8078.62, samples=11 00:11:25.938 iops : min= 3060, max=10124, avg=6614.09, stdev=2019.64, samples=11 00:11:25.938 write: IOPS=7239, BW=28.3MiB/s (29.7MB/s)(148MiB/5238msec); 0 zone resets 00:11:25.938 slat (usec): min=9, max=2908, avg=55.38, stdev=106.58 00:11:25.938 clat (usec): min=581, max=14872, avg=6076.26, stdev=1598.14 00:11:25.938 lat (usec): min=696, max=14908, avg=6131.64, stdev=1610.28 00:11:25.938 clat percentiles (usec): 00:11:25.938 | 1.00th=[ 2606], 5.00th=[ 3359], 10.00th=[ 3785], 20.00th=[ 4424], 00:11:25.938 | 30.00th=[ 5145], 40.00th=[ 6063], 50.00th=[ 6521], 60.00th=[ 6783], 00:11:25.938 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7701], 95.00th=[ 8029], 00:11:25.938 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12911], 99.95th=[13566], 00:11:25.938 | 99.99th=[14615] 00:11:25.938 bw ( KiB/s): min=12472, max=40968, per=91.11%, avg=26384.00, stdev=7898.05, samples=11 00:11:25.938 iops : min= 3118, max=10242, avg=6596.00, stdev=1974.51, samples=11 00:11:25.938 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:25.938 lat (msec) : 2=0.22%, 4=8.42%, 10=87.33%, 20=4.02% 00:11:25.938 cpu : usr=5.96%, sys=30.14%, ctx=7549, majf=0, minf=104 00:11:25.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:25.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.938 issued rwts: total=74232,37919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.938 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:25.938 00:11:25.938 Run status group 0 (all jobs): 00:11:25.938 READ: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=290MiB (304MB), run=6003-6003msec 00:11:25.938 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=148MiB (155MB), run=5238-5238msec 00:11:25.938 00:11:25.938 Disk stats (read/write): 00:11:25.938 nvme0n1: ios=72825/37919, merge=0/0, ticks=480476/205061, in_queue=685537, util=98.63% 00:11:25.938 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:26.198 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.458 rmmod nvme_tcp 00:11:26.458 rmmod nvme_fabrics 00:11:26.458 rmmod nvme_keyring 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 
66443 ']' 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66443 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66443 ']' 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66443 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.458 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66443 00:11:26.718 killing process with pid 66443 00:11:26.718 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.718 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.718 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66443' 00:11:26.718 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66443 00:11:26.718 13:59:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66443 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.719 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:26.979 ************************************ 00:11:26.979 END TEST nvmf_target_multipath 00:11:26.979 ************************************ 00:11:26.979 00:11:26.979 real 0m19.091s 00:11:26.979 user 1m12.319s 00:11:26.979 sys 0m9.106s 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.979 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.979 
************************************ 00:11:26.979 START TEST nvmf_zcopy 00:11:26.979 ************************************ 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:26.980 * Looking for test storage... 00:11:26.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.980 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.240 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:27.241 Cannot find device "nvmf_tgt_br" 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.241 Cannot find device "nvmf_tgt_br2" 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:27.241 Cannot find device "nvmf_tgt_br" 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:27.241 Cannot find device "nvmf_tgt_br2" 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:27.241 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:27.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:11:27.501 00:11:27.501 --- 10.0.0.2 ping statistics --- 00:11:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.501 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:27.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:11:27.501 00:11:27.501 --- 10.0.0.3 ping statistics --- 00:11:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.501 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:11:27.501 00:11:27.501 --- 10.0.0.1 ping statistics --- 00:11:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.501 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66907 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66907 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66907 ']' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.501 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.501 [2024-07-25 13:59:36.758283] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:11:27.501 [2024-07-25 13:59:36.758360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.759 [2024-07-25 13:59:36.893329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.759 [2024-07-25 13:59:37.046401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.759 [2024-07-25 13:59:37.046574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.759 [2024-07-25 13:59:37.046621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.759 [2024-07-25 13:59:37.046668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.759 [2024-07-25 13:59:37.046687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.759 [2024-07-25 13:59:37.046741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.018 [2024-07-25 13:59:37.122936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:28.585 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 [2024-07-25 13:59:37.733251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.586 [2024-07-25 13:59:37.753371] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 malloc0 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:28.586 { 00:11:28.586 "params": { 00:11:28.586 "name": "Nvme$subsystem", 00:11:28.586 "trtype": "$TEST_TRANSPORT", 00:11:28.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.586 "adrfam": "ipv4", 00:11:28.586 "trsvcid": "$NVMF_PORT", 00:11:28.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.586 "hdgst": ${hdgst:-false}, 00:11:28.586 "ddgst": ${ddgst:-false} 00:11:28.586 }, 00:11:28.586 "method": "bdev_nvme_attach_controller" 00:11:28.586 } 00:11:28.586 EOF 00:11:28.586 )") 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
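The rpc_cmd calls traced above build the zero-copy target that bdevperf exercises next: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev attached as namespace 1. Replayed as plain rpc.py calls, as a sketch only (rpc_cmd wraps the same JSON-RPC methods; the rpc.py path and the /var/tmp/spdk.sock socket are assumptions based on the launch step above):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Transport options copied verbatim from the trace (-t tcp -o -c 0 --zcopy).
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with a 4096-byte block size, exported as NSID 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1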
00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:28.586 13:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:28.586 "params": { 00:11:28.586 "name": "Nvme1", 00:11:28.586 "trtype": "tcp", 00:11:28.586 "traddr": "10.0.0.2", 00:11:28.586 "adrfam": "ipv4", 00:11:28.586 "trsvcid": "4420", 00:11:28.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.586 "hdgst": false, 00:11:28.586 "ddgst": false 00:11:28.586 }, 00:11:28.586 "method": "bdev_nvme_attach_controller" 00:11:28.586 }' 00:11:28.586 [2024-07-25 13:59:37.862337] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:11:28.586 [2024-07-25 13:59:37.862467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66940 ] 00:11:28.845 [2024-07-25 13:59:38.001369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.845 [2024-07-25 13:59:38.105070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.103 [2024-07-25 13:59:38.156141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:29.103 Running I/O for 10 seconds... 00:11:39.079 00:11:39.079 Latency(us) 00:11:39.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.079 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:39.079 Verification LBA range: start 0x0 length 0x1000 00:11:39.079 Nvme1n1 : 10.01 6618.46 51.71 0.00 0.00 19286.13 2232.23 29763.07 00:11:39.079 =================================================================================================================== 00:11:39.079 Total : 6618.46 51.71 0.00 0.00 19286.13 2232.23 29763.07 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67062 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.339 { 00:11:39.339 "params": { 00:11:39.339 "name": "Nvme$subsystem", 00:11:39.339 "trtype": "$TEST_TRANSPORT", 00:11:39.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.339 "adrfam": "ipv4", 00:11:39.339 "trsvcid": "$NVMF_PORT", 00:11:39.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.339 "hdgst": ${hdgst:-false}, 00:11:39.339 "ddgst": ${ddgst:-false} 00:11:39.339 }, 00:11:39.339 "method": "bdev_nvme_attach_controller" 00:11:39.339 } 00:11:39.339 
EOF 00:11:39.339 )") 00:11:39.339 [2024-07-25 13:59:48.462034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.462080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:39.339 [2024-07-25 13:59:48.473984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.474018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:39.339 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.339 "params": { 00:11:39.339 "name": "Nvme1", 00:11:39.339 "trtype": "tcp", 00:11:39.339 "traddr": "10.0.0.2", 00:11:39.339 "adrfam": "ipv4", 00:11:39.339 "trsvcid": "4420", 00:11:39.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.339 "hdgst": false, 00:11:39.339 "ddgst": false 00:11:39.339 }, 00:11:39.339 "method": "bdev_nvme_attach_controller" 00:11:39.339 }' 00:11:39.339 [2024-07-25 13:59:48.485961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.485990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.489275] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:11:39.339 [2024-07-25 13:59:48.489345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67062 ] 00:11:39.339 [2024-07-25 13:59:48.497924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.498002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.509910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.509984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.521888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.521963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.533862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.533931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.545849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.545915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.557823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.557892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.569811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.569888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.581792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.581867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.593761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.593829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.605765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.605829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.617727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.617803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.627744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.339 [2024-07-25 13:59:48.629714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.629791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.339 [2024-07-25 13:59:48.641702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.339 [2024-07-25 13:59:48.641800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.653673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.653758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.665662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.665740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.677634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.677669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.689633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.689671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.701611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.701648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.713589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.713627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.725576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.725613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.733534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 
[2024-07-25 13:59:48.733572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.734064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.599 [2024-07-25 13:59:48.741519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.741546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.753508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.753550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.769494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.769533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.781479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.781522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.784114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.599 [2024-07-25 13:59:48.793451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.599 [2024-07-25 13:59:48.793492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.599 [2024-07-25 13:59:48.805416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.805445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.817391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.817416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.829393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.829421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.841426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.841457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.853394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.853421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.865376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.865402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.877361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.877388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 [2024-07-25 13:59:48.889351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.889376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.600 Running I/O for 5 seconds... 
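From here to the end of the excerpt the log is dominated by paired "Requested NSID 1 already in use" / "Unable to add namespace" errors. Judging from the trace, these are expected: while the second bdevperf instance (pid 67062, -t 5 -q 128 -w randrw -M 50 -o 8192) runs random read/write I/O, the script keeps re-adding malloc0 as NSID 1. Every attempt fails, but the failure is reported from nvmf_rpc_ns_paused, i.e. after the subsystem has been paused for the RPC, so each attempt still forces a pause/resume cycle with zero-copy requests outstanding, which appears to be the condition under test. A sketch of that pattern (not the literal zcopy.sh source; the temporary JSON path is made up, and the config is a minimal equivalent of the gen_nvmf_target_json output printed earlier in the trace):

# Minimal bdevperf config matching the attach-controller JSON expanded above.
cat > /tmp/zcopy_bdev.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2> /dev/null; do
    # Fails with "Requested NSID 1 already in use", but the attempt still
    # pauses and resumes the subsystem while zero-copy I/O is in flight.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 &> /dev/null || :
done
wait "$perfpid"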
00:11:39.600 [2024-07-25 13:59:48.901326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.600 [2024-07-25 13:59:48.901404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.860 [2024-07-25 13:59:48.918199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.860 [2024-07-25 13:59:48.918286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.860 [2024-07-25 13:59:48.933265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.860 [2024-07-25 13:59:48.933353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.860 [2024-07-25 13:59:48.949274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.860 [2024-07-25 13:59:48.949409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:48.960261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:48.960390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:48.975306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:48.975381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:48.990032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:48.990110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.005763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.005841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.022598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.022678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.039328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.039399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.055860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.055931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.072152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.072263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.083794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.083831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.099617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.099654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.115767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 
[2024-07-25 13:59:49.115802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.132348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.132383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.148508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.148557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.861 [2024-07-25 13:59:49.158924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.861 [2024-07-25 13:59:49.158972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.174015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.174055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.187953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.187988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.202932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.202964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.214293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.214347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.230748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.230785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.246058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.246096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.257281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.257325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.273080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.273113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.289433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.289466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.307264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.307332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.321733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.321780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.337003] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.337040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.353717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.353757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.369894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.369936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.386562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.386600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.403069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.403112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.131 [2024-07-25 13:59:49.420604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.131 [2024-07-25 13:59:49.420648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.435674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.435723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.451686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.451734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.469124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.469179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.485126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.485174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.496272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.496342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.512195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.512247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.529196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.529238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.392 [2024-07-25 13:59:49.546816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.392 [2024-07-25 13:59:49.546857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.562575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.562613] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.580641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.580680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.596192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.596228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.608441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.608477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.624542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.624578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.641686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.641719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.658687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.658720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.675096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.675132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.393 [2024-07-25 13:59:49.692949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.393 [2024-07-25 13:59:49.692986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.708559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.708593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.725202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.725239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.743386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.743423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.758089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.758142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.772997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.773046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.789692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.789733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.804781] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.804818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.820085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.820123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.837944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.837980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.853057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.853097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.862049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.862085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.877797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.877833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.894020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.894055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.914876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.914908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.929900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.929930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.941247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.941277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.652 [2024-07-25 13:59:49.956289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.652 [2024-07-25 13:59:49.956338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:49.973531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:49.973576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:49.989355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:49.989391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.000992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.001031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.016748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.016785] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.033282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.033330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.049894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.049931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.067295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.067343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.085051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.085091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.100662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.100701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.118275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.118323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.135429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.135469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.151752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.151788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.168579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.168616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.184845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.184880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.913 [2024-07-25 13:59:50.201721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.913 [2024-07-25 13:59:50.201754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.218816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.218852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.236431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.236472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.252659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.252701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.269463] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.269504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.286778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.286818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.304081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.304122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.320353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.320399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.175 [2024-07-25 13:59:50.337569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.175 [2024-07-25 13:59:50.337612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.353732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.353780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.371210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.371266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.387030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.387077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.403459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.403493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.419982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.420016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.430771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.430806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.446983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.447029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.176 [2024-07-25 13:59:50.464013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.176 [2024-07-25 13:59:50.464047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.480393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.480427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.496759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.496792] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.508313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.508346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.523857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.523892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.541110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.541144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.557604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.557636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.574176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.574209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.591032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.447 [2024-07-25 13:59:50.591069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.447 [2024-07-25 13:59:50.607504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.607538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.622121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.622162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.633374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.633405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.648342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.648376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.658887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.658919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.674420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.674452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.691018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.691055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.707475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.707511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.718945] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.718980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.448 [2024-07-25 13:59:50.734846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.448 [2024-07-25 13:59:50.734882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.751865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.751900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.769405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.769439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.785369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.785404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.802337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.802373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.818995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.819032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.836631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.836670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.852845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.852890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.870349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.870399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.888143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.888193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.902502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.902538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.918020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.918061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.933838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.933877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.952204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.952246] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.971797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.971839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:50.988787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:50.988828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.708 [2024-07-25 13:59:51.004797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.708 [2024-07-25 13:59:51.004842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.014019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.014055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.023556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.023586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.032884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.032912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.046526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.046553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.061716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.061747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.072846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.072881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.092609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.092649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.108681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.108720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.126919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.126981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.142310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.142364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.157882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.157929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [2024-07-25 13:59:51.174946] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.967 [2024-07-25 13:59:51.174994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.967 [the "Requested NSID 1 already in use" / "Unable to add namespace" pair above repeats for every add-namespace attempt from 13:59:51.190556 through 13:59:53.864454; the intervening duplicate entries are omitted] 00:11:44.825 [2024-07-25 13:59:53.879830]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.825 [2024-07-25 13:59:53.879880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.825 [2024-07-25 13:59:53.892250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.825 [2024-07-25 13:59:53.892311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.825
00:11:44.825
00:11:44.825 Latency(us)
00:11:44.825 Device Information : runtime(s)     IOPS     MiB/s    Fail/s    TO/s    Average      min        max
00:11:44.825 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:44.825 Nvme1n1            :       5.01 13803.68    107.84      0.00    0.00    9262.81  3348.35   20032.84
00:11:44.825 ===================================================================================================================
00:11:44.825 Total              :            13803.68    107.84      0.00    0.00    9262.81  3348.35   20032.84
00:11:44.825 [2024-07-25 13:59:53.903498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.903540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.915476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.915517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.927436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.927465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.939418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.939448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.951392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.951423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.963376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.963409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.975379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.975416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.987380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.987413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:53.999378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:53.999407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.011373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.011407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.023364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.023407]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.035320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.035349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.047292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.047332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.059276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.059316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.071258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.071289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 [2024-07-25 13:59:54.083226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.826 [2024-07-25 13:59:54.083245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.826 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67062) - No such process 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67062 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.826 delay0 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.826 13:59:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:45.090 [2024-07-25 13:59:54.302951] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:53.268 Initializing NVMe Controllers 00:11:53.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.268 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:53.268 Initialization complete. Launching workers. 00:11:53.268 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 23116 00:11:53.268 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23303, failed to submit 80 00:11:53.268 success 23187, unsuccess 116, failed 0 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.268 rmmod nvme_tcp 00:11:53.268 rmmod nvme_fabrics 00:11:53.268 rmmod nvme_keyring 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66907 ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66907 ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66907' 00:11:53.268 killing process with pid 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66907 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:53.268 00:11:53.268 real 0m25.681s 00:11:53.268 user 0m41.565s 00:11:53.268 sys 0m7.319s 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 ************************************ 00:11:53.268 END TEST nvmf_zcopy 00:11:53.268 ************************************ 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 ************************************ 00:11:53.268 START TEST nvmf_nmic 00:11:53.268 ************************************ 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:53.268 * Looking for test storage... 
00:11:53.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.268 14:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:53.268 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.269 14:00:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:53.269 Cannot find device "nvmf_tgt_br" 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.269 Cannot find device "nvmf_tgt_br2" 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:11:53.269 Cannot find device "nvmf_tgt_br" 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:53.269 Cannot find device "nvmf_tgt_br2" 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:53.269 14:00:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.269 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:53.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:11:53.270 00:11:53.270 --- 10.0.0.2 ping statistics --- 00:11:53.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.270 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:53.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:11:53.270 00:11:53.270 --- 10.0.0.3 ping statistics --- 00:11:53.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.270 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
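The nvmf_veth_init block traced above is what builds the test network: three veth pairs are created, the initiator end (nvmf_init_if, 10.0.0.1/24) stays in the default namespace, the target ends (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers are enslaved to nvmf_br, and the pings that follow are plain reachability checks. A condensed sketch of the same topology, keeping the interface names from the trace (this is not the literal common.sh code):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic, as in the trace
  ping -c 1 10.0.0.2                                                  # reachability check, as in the trace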
00:11:53.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:11:53.270 00:11:53.270 --- 10.0.0.1 ping statistics --- 00:11:53.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.270 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67391 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67391 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67391 ']' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.270 14:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:53.270 [2024-07-25 14:00:02.495071] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:11:53.270 [2024-07-25 14:00:02.495138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.533 [2024-07-25 14:00:02.639536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.533 [2024-07-25 14:00:02.745348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.533 [2024-07-25 14:00:02.745398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.533 [2024-07-25 14:00:02.745406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.533 [2024-07-25 14:00:02.745411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.533 [2024-07-25 14:00:02.745416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.533 [2024-07-25 14:00:02.745655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.533 [2024-07-25 14:00:02.745993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.533 [2024-07-25 14:00:02.746056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.533 [2024-07-25 14:00:02.746059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.533 [2024-07-25 14:00:02.788436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:54.102 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.102 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:54.102 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.102 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.102 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 [2024-07-25 14:00:03.418888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 Malloc0 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.361 14:00:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 [2024-07-25 14:00:03.485469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:54.361 test case1: single bdev can't be used in multiple subsystems 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 [2024-07-25 14:00:03.521261] bdev.c:8108:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:54.361 [2024-07-25 14:00:03.521636] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:54.361 [2024-07-25 14:00:03.521718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.361 request: 00:11:54.361 { 00:11:54.361 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.361 "namespace": { 00:11:54.361 "bdev_name": "Malloc0", 00:11:54.361 "no_auto_visible": false 00:11:54.361 }, 00:11:54.361 "method": "nvmf_subsystem_add_ns", 00:11:54.361 "req_id": 1 00:11:54.361 } 00:11:54.361 Got JSON-RPC error response 00:11:54.361 response: 00:11:54.361 { 00:11:54.361 "code": -32602, 00:11:54.361 "message": "Invalid parameters" 00:11:54.361 } 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:54.361 Adding namespace failed - expected result. 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:54.361 test case2: host connect to nvmf target in multiple paths 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 [2024-07-25 14:00:03.537390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.361 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.620 14:00:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.525 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.525 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.525 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.784 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.784 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.784 14:00:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:56.784 14:00:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:56.784 [global] 00:11:56.784 thread=1 00:11:56.784 invalidate=1 00:11:56.784 rw=write 00:11:56.784 time_based=1 00:11:56.784 runtime=1 00:11:56.784 ioengine=libaio 00:11:56.784 direct=1 00:11:56.784 bs=4096 00:11:56.784 iodepth=1 00:11:56.784 norandommap=0 00:11:56.784 numjobs=1 00:11:56.784 00:11:56.784 verify_dump=1 00:11:56.784 verify_backlog=512 00:11:56.784 verify_state_save=0 00:11:56.784 do_verify=1 00:11:56.784 verify=crc32c-intel 00:11:56.784 [job0] 00:11:56.784 filename=/dev/nvme0n1 00:11:56.784 Could not set queue depth (nvme0n1) 00:11:56.784 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.784 fio-3.35 00:11:56.784 Starting 1 thread 00:11:58.235 00:11:58.235 job0: (groupid=0, jobs=1): err= 0: pid=67484: Thu Jul 25 14:00:07 2024 00:11:58.235 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec) 00:11:58.235 slat (nsec): min=6776, max=41737, avg=10016.08, stdev=2664.91 00:11:58.235 clat (usec): min=110, max=304, avg=156.56, stdev=19.38 00:11:58.235 lat (usec): min=118, max=312, avg=166.58, stdev=19.64 00:11:58.235 clat percentiles (usec): 00:11:58.235 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 141], 00:11:58.235 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:11:58.235 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 192], 00:11:58.235 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 231], 99.95th=[ 241], 00:11:58.235 | 99.99th=[ 306] 00:11:58.235 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:58.235 slat (usec): min=10, max=109, avg=15.04, stdev= 4.35 00:11:58.235 clat (usec): min=70, max=1410, avg=97.28, stdev=26.27 00:11:58.235 lat (usec): min=82, max=1426, avg=112.32, stdev=26.92 00:11:58.235 clat percentiles (usec): 00:11:58.235 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85], 00:11:58.235 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 99], 00:11:58.235 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 124], 00:11:58.235 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 178], 99.95th=[ 241], 00:11:58.235 | 99.99th=[ 1418] 00:11:58.235 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:11:58.235 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:58.235 lat (usec) : 100=31.73%, 250=68.24%, 500=0.01% 00:11:58.235 lat (msec) : 2=0.01% 00:11:58.235 cpu : usr=1.20%, sys=7.30%, ctx=7119, majf=0, minf=2 00:11:58.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.235 issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.235 00:11:58.235 Run status group 0 (all jobs): 00:11:58.235 READ: bw=13.8MiB/s (14.5MB/s), 13.8MiB/s-13.8MiB/s (14.5MB/s-14.5MB/s), io=13.8MiB (14.5MB), run=1001-1001msec 00:11:58.235 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:11:58.235 00:11:58.235 Disk stats (read/write): 00:11:58.235 nvme0n1: ios=3122/3361, merge=0/0, 
ticks=508/354, in_queue=862, util=91.29% 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.235 rmmod nvme_tcp 00:11:58.235 rmmod nvme_fabrics 00:11:58.235 rmmod nvme_keyring 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67391 ']' 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67391 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67391 ']' 00:11:58.235 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67391 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67391 00:11:58.236 killing process with pid 67391 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67391' 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@969 -- # kill 67391 00:11:58.236 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67391 00:11:58.494 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.494 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.494 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.494 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.494 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:58.495 00:11:58.495 real 0m5.722s 00:11:58.495 user 0m18.419s 00:11:58.495 sys 0m1.913s 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:58.495 ************************************ 00:11:58.495 END TEST nvmf_nmic 00:11:58.495 ************************************ 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.495 ************************************ 00:11:58.495 START TEST nvmf_fio_target 00:11:58.495 ************************************ 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:58.495 * Looking for test storage... 
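One detail from the nmic run that just finished is worth keeping in mind for the fio test that starts here: case2 exposed cnode1 on ports 4420 and 4421 and connected the host once per listener, which is why the single nvme disconnect above reported two controllers torn down. Stripped of the trace prefixes, the host-side commands were essentially the following (hostnqn and hostid are the values generated for this run by nvme gen-hostnqn):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
  HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tears down both controllers, one per path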
00:11:58.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:58.495 
14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:58.495 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:58.496 Cannot find device "nvmf_tgt_br" 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:58.496 Cannot find device "nvmf_tgt_br2" 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:58.496 Cannot find device "nvmf_tgt_br" 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:58.496 Cannot find device "nvmf_tgt_br2" 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:58.496 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:58.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:58.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:58.753 
14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:58.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:58.753 00:11:58.753 --- 10.0.0.2 ping statistics --- 00:11:58.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.753 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:58.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:58.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:58.753 00:11:58.753 --- 10.0.0.3 ping statistics --- 00:11:58.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.753 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:58.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
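With the namespace topology rebuilt for this test, the target application is again started inside nvmf_tgt_ns_spdk, as the next lines show, so that it only sees the veth side of the network; the TCP transport is then created over the RPC socket. Condensed, that launch sequence is roughly (binary and rpc.py paths as used in this workspace, same core mask and transport options as in the trace):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # wait until the app answers on /var/tmp/spdk.sock (the waitforlisten step in the trace), then:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192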
00:11:58.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:58.753 00:11:58.753 --- 10.0.0.1 ping statistics --- 00:11:58.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.753 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.753 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67662 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67662 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67662 ']' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.754 14:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.754 [2024-07-25 14:00:08.017204] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:11:58.754 [2024-07-25 14:00:08.018003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.011 [2024-07-25 14:00:08.153070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.011 [2024-07-25 14:00:08.271867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.011 [2024-07-25 14:00:08.272020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.011 [2024-07-25 14:00:08.272065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.011 [2024-07-25 14:00:08.272125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.011 [2024-07-25 14:00:08.272170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.011 [2024-07-25 14:00:08.272285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.011 [2024-07-25 14:00:08.272376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.011 [2024-07-25 14:00:08.272373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.011 [2024-07-25 14:00:08.272330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.268 [2024-07-25 14:00:08.321481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.834 14:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.091 [2024-07-25 14:00:09.184239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.092 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.348 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:00.348 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.656 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:00.656 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.656 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:00.656 14:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.914 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:00.914 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:01.172 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:01.431 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:01.431 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:01.690 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:01.690 14:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:01.949 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:01.949 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:02.208 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.479 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:02.479 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:02.743 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:02.743 14:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.002 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.303 [2024-07-25 14:00:12.313932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.303 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:03.303 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.562 14:00:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:03.562 14:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:06.092 14:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:06.092 [global] 00:12:06.092 thread=1 00:12:06.092 invalidate=1 00:12:06.092 rw=write 00:12:06.092 time_based=1 00:12:06.092 runtime=1 00:12:06.092 ioengine=libaio 00:12:06.092 direct=1 00:12:06.092 bs=4096 00:12:06.092 iodepth=1 00:12:06.092 norandommap=0 00:12:06.092 numjobs=1 00:12:06.092 00:12:06.092 verify_dump=1 00:12:06.092 verify_backlog=512 00:12:06.092 verify_state_save=0 00:12:06.092 do_verify=1 00:12:06.092 verify=crc32c-intel 00:12:06.092 [job0] 00:12:06.092 filename=/dev/nvme0n1 00:12:06.092 [job1] 00:12:06.092 filename=/dev/nvme0n2 00:12:06.092 [job2] 00:12:06.092 filename=/dev/nvme0n3 00:12:06.092 [job3] 00:12:06.092 filename=/dev/nvme0n4 00:12:06.092 Could not set queue depth (nvme0n1) 00:12:06.092 Could not set queue depth (nvme0n2) 00:12:06.092 Could not set queue depth (nvme0n3) 00:12:06.092 Could not set queue depth (nvme0n4) 00:12:06.092 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.092 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.092 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.092 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.092 fio-3.35 00:12:06.092 Starting 4 threads 00:12:07.025 00:12:07.025 job0: (groupid=0, jobs=1): err= 0: pid=67845: Thu Jul 25 14:00:16 2024 00:12:07.025 read: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:12:07.025 slat (nsec): min=6705, max=33103, avg=8339.31, stdev=1640.65 00:12:07.025 clat (usec): min=120, max=246, avg=149.84, stdev=13.24 00:12:07.025 lat (usec): min=128, max=253, avg=158.18, stdev=13.54 00:12:07.025 clat percentiles (usec): 00:12:07.025 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:12:07.025 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:12:07.025 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:12:07.025 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 225], 99.95th=[ 243], 00:12:07.025 | 99.99th=[ 247] 
00:12:07.025 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:07.025 slat (usec): min=8, max=260, avg=13.85, stdev= 7.09 00:12:07.025 clat (usec): min=78, max=1410, avg=110.71, stdev=24.92 00:12:07.025 lat (usec): min=89, max=1421, avg=124.56, stdev=26.57 00:12:07.025 clat percentiles (usec): 00:12:07.025 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 101], 00:12:07.025 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 112], 00:12:07.025 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 133], 00:12:07.025 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 306], 00:12:07.025 | 99.99th=[ 1418] 00:12:07.025 bw ( KiB/s): min=16168, max=16168, per=35.82%, avg=16168.00, stdev= 0.00, samples=1 00:12:07.025 iops : min= 4042, max= 4042, avg=4042.00, stdev= 0.00, samples=1 00:12:07.025 lat (usec) : 100=8.72%, 250=91.26%, 500=0.01% 00:12:07.025 lat (msec) : 2=0.01% 00:12:07.025 cpu : usr=1.20%, sys=6.50%, ctx=7035, majf=0, minf=7 00:12:07.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.025 issued rwts: total=3449,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.025 job1: (groupid=0, jobs=1): err= 0: pid=67846: Thu Jul 25 14:00:16 2024 00:12:07.025 read: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec) 00:12:07.025 slat (nsec): min=6996, max=98840, avg=9743.88, stdev=5823.82 00:12:07.025 clat (usec): min=122, max=590, avg=153.44, stdev=17.12 00:12:07.025 lat (usec): min=129, max=598, avg=163.19, stdev=19.34 00:12:07.025 clat percentiles (usec): 00:12:07.025 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:12:07.025 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:12:07.025 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 184], 00:12:07.025 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 258], 99.95th=[ 306], 00:12:07.025 | 99.99th=[ 594] 00:12:07.025 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:07.025 slat (usec): min=9, max=195, avg=14.52, stdev= 6.60 00:12:07.025 clat (usec): min=71, max=289, avg=110.51, stdev=12.81 00:12:07.025 lat (usec): min=82, max=408, avg=125.03, stdev=15.86 00:12:07.025 clat percentiles (usec): 00:12:07.025 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 101], 00:12:07.025 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112], 00:12:07.025 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 135], 00:12:07.025 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 215], 00:12:07.025 | 99.99th=[ 289] 00:12:07.025 bw ( KiB/s): min=16304, max=16304, per=36.13%, avg=16304.00, stdev= 0.00, samples=1 00:12:07.025 iops : min= 4076, max= 4076, avg=4076.00, stdev= 0.00, samples=1 00:12:07.025 lat (usec) : 100=9.17%, 250=90.74%, 500=0.07%, 750=0.01% 00:12:07.025 cpu : usr=1.20%, sys=7.20%, ctx=6915, majf=0, minf=10 00:12:07.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 issued rwts: total=3330,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.026 
job2: (groupid=0, jobs=1): err= 0: pid=67847: Thu Jul 25 14:00:16 2024 00:12:07.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:07.026 slat (nsec): min=5153, max=38029, avg=8118.70, stdev=2608.23 00:12:07.026 clat (usec): min=209, max=771, avg=256.65, stdev=26.31 00:12:07.026 lat (usec): min=217, max=778, avg=264.77, stdev=26.41 00:12:07.026 clat percentiles (usec): 00:12:07.026 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:12:07.026 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:12:07.026 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 306], 00:12:07.026 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 383], 99.95th=[ 400], 00:12:07.026 | 99.99th=[ 775] 00:12:07.026 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:12:07.026 slat (usec): min=6, max=276, avg=14.69, stdev=17.49 00:12:07.026 clat (usec): min=2, max=853, avg=205.22, stdev=31.16 00:12:07.026 lat (usec): min=172, max=863, avg=219.92, stdev=31.82 00:12:07.026 clat percentiles (usec): 00:12:07.026 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:12:07.026 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:12:07.026 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 253], 00:12:07.026 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 429], 99.95th=[ 465], 00:12:07.026 | 99.99th=[ 857] 00:12:07.026 bw ( KiB/s): min= 8224, max= 8224, per=18.22%, avg=8224.00, stdev= 0.00, samples=1 00:12:07.026 iops : min= 2056, max= 2056, avg=2056.00, stdev= 0.00, samples=1 00:12:07.026 lat (usec) : 4=0.12%, 10=0.02%, 100=0.02%, 250=68.82%, 500=30.96% 00:12:07.026 lat (usec) : 1000=0.05% 00:12:07.026 cpu : usr=1.00%, sys=3.90%, ctx=4135, majf=0, minf=7 00:12:07.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.026 job3: (groupid=0, jobs=1): err= 0: pid=67848: Thu Jul 25 14:00:16 2024 00:12:07.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:07.026 slat (nsec): min=5079, max=44792, avg=7957.83, stdev=2870.53 00:12:07.026 clat (usec): min=209, max=749, avg=256.79, stdev=26.15 00:12:07.026 lat (usec): min=217, max=757, avg=264.75, stdev=26.21 00:12:07.026 clat percentiles (usec): 00:12:07.026 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:12:07.026 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:12:07.026 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 310], 00:12:07.026 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 375], 99.95th=[ 396], 00:12:07.026 | 99.99th=[ 750] 00:12:07.026 write: IOPS=2062, BW=8252KiB/s (8450kB/s)(8260KiB/1001msec); 0 zone resets 00:12:07.026 slat (usec): min=6, max=235, avg=13.44, stdev=13.95 00:12:07.026 clat (usec): min=3, max=913, avg=206.11, stdev=30.96 00:12:07.026 lat (usec): min=93, max=927, avg=219.55, stdev=31.92 00:12:07.026 clat percentiles (usec): 00:12:07.026 | 1.00th=[ 149], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:12:07.026 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:12:07.026 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 255], 00:12:07.026 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 416], 99.95th=[ 453], 
00:12:07.026 | 99.99th=[ 914] 00:12:07.026 bw ( KiB/s): min= 8240, max= 8240, per=18.26%, avg=8240.00, stdev= 0.00, samples=1 00:12:07.026 iops : min= 2060, max= 2060, avg=2060.00, stdev= 0.00, samples=1 00:12:07.026 lat (usec) : 4=0.02%, 50=0.05%, 100=0.10%, 250=68.71%, 500=31.07% 00:12:07.026 lat (usec) : 750=0.02%, 1000=0.02% 00:12:07.026 cpu : usr=0.90%, sys=3.70%, ctx=4136, majf=0, minf=11 00:12:07.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.026 issued rwts: total=2048,2065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.026 00:12:07.026 Run status group 0 (all jobs): 00:12:07.026 READ: bw=42.4MiB/s (44.5MB/s), 8184KiB/s-13.5MiB/s (8380kB/s-14.1MB/s), io=42.5MiB (44.5MB), run=1001-1001msec 00:12:07.026 WRITE: bw=44.1MiB/s (46.2MB/s), 8236KiB/s-14.0MiB/s (8433kB/s-14.7MB/s), io=44.1MiB (46.3MB), run=1001-1001msec 00:12:07.026 00:12:07.026 Disk stats (read/write): 00:12:07.026 nvme0n1: ios=3117/3072, merge=0/0, ticks=499/358, in_queue=857, util=90.28% 00:12:07.026 nvme0n2: ios=3085/3072, merge=0/0, ticks=469/359, in_queue=828, util=89.62% 00:12:07.026 nvme0n3: ios=1588/2048, merge=0/0, ticks=411/414, in_queue=825, util=89.60% 00:12:07.026 nvme0n4: ios=1605/2048, merge=0/0, ticks=431/408, in_queue=839, util=90.29% 00:12:07.026 14:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:07.026 [global] 00:12:07.026 thread=1 00:12:07.026 invalidate=1 00:12:07.026 rw=randwrite 00:12:07.026 time_based=1 00:12:07.026 runtime=1 00:12:07.026 ioengine=libaio 00:12:07.026 direct=1 00:12:07.026 bs=4096 00:12:07.026 iodepth=1 00:12:07.026 norandommap=0 00:12:07.026 numjobs=1 00:12:07.026 00:12:07.026 verify_dump=1 00:12:07.026 verify_backlog=512 00:12:07.026 verify_state_save=0 00:12:07.026 do_verify=1 00:12:07.026 verify=crc32c-intel 00:12:07.026 [job0] 00:12:07.026 filename=/dev/nvme0n1 00:12:07.026 [job1] 00:12:07.026 filename=/dev/nvme0n2 00:12:07.026 [job2] 00:12:07.026 filename=/dev/nvme0n3 00:12:07.026 [job3] 00:12:07.026 filename=/dev/nvme0n4 00:12:07.284 Could not set queue depth (nvme0n1) 00:12:07.284 Could not set queue depth (nvme0n2) 00:12:07.284 Could not set queue depth (nvme0n3) 00:12:07.284 Could not set queue depth (nvme0n4) 00:12:07.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.284 fio-3.35 00:12:07.284 Starting 4 threads 00:12:08.664 00:12:08.664 job0: (groupid=0, jobs=1): err= 0: pid=67902: Thu Jul 25 14:00:17 2024 00:12:08.664 read: IOPS=3318, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec) 00:12:08.664 slat (nsec): min=6815, max=29263, avg=8657.34, stdev=2142.16 00:12:08.664 clat (usec): min=120, max=2053, avg=154.45, stdev=37.69 00:12:08.664 lat (usec): min=127, max=2066, avg=163.10, stdev=37.91 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 131], 
5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 00:12:08.664 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:12:08.664 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:12:08.664 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 482], 99.95th=[ 537], 00:12:08.664 | 99.99th=[ 2057] 00:12:08.664 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:08.664 slat (usec): min=10, max=127, avg=14.05, stdev= 7.13 00:12:08.664 clat (usec): min=83, max=1678, avg=111.55, stdev=29.88 00:12:08.664 lat (usec): min=95, max=1690, avg=125.60, stdev=31.59 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 101], 00:12:08.664 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112], 00:12:08.664 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 137], 00:12:08.664 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 231], 99.95th=[ 420], 00:12:08.664 | 99.99th=[ 1680] 00:12:08.664 bw ( KiB/s): min=15184, max=15184, per=34.35%, avg=15184.00, stdev= 0.00, samples=1 00:12:08.664 iops : min= 3796, max= 3796, avg=3796.00, stdev= 0.00, samples=1 00:12:08.664 lat (usec) : 100=9.34%, 250=90.52%, 500=0.10%, 750=0.01% 00:12:08.664 lat (msec) : 2=0.01%, 4=0.01% 00:12:08.664 cpu : usr=1.70%, sys=6.10%, ctx=6906, majf=0, minf=9 00:12:08.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 issued rwts: total=3322,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.664 job1: (groupid=0, jobs=1): err= 0: pid=67903: Thu Jul 25 14:00:17 2024 00:12:08.664 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:08.664 slat (nsec): min=6895, max=98822, avg=12524.34, stdev=9179.71 00:12:08.664 clat (usec): min=128, max=1734, avg=286.19, stdev=87.25 00:12:08.664 lat (usec): min=136, max=1743, avg=298.72, stdev=89.88 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 178], 20.00th=[ 237], 00:12:08.664 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:12:08.664 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 445], 00:12:08.664 | 99.00th=[ 523], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 709], 00:12:08.664 | 99.99th=[ 1729] 00:12:08.664 write: IOPS=2095, BW=8384KiB/s (8585kB/s)(8392KiB/1001msec); 0 zone resets 00:12:08.664 slat (usec): min=10, max=139, avg=16.64, stdev= 9.58 00:12:08.664 clat (usec): min=88, max=358, avg=165.16, stdev=43.77 00:12:08.664 lat (usec): min=100, max=497, avg=181.80, stdev=46.33 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 99], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 118], 00:12:08.664 | 30.00th=[ 124], 40.00th=[ 137], 50.00th=[ 182], 60.00th=[ 192], 00:12:08.664 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 229], 00:12:08.664 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 302], 99.95th=[ 338], 00:12:08.664 | 99.99th=[ 359] 00:12:08.664 bw ( KiB/s): min= 8680, max= 8680, per=19.64%, avg=8680.00, stdev= 0.00, samples=1 00:12:08.664 iops : min= 2170, max= 2170, avg=2170.00, stdev= 0.00, samples=1 00:12:08.664 lat (usec) : 100=0.65%, 250=65.34%, 500=33.19%, 750=0.80% 00:12:08.664 lat (msec) : 2=0.02% 00:12:08.664 cpu : usr=1.40%, sys=4.80%, ctx=4146, majf=0, minf=15 00:12:08.664 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 issued rwts: total=2048,2098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.664 job2: (groupid=0, jobs=1): err= 0: pid=67904: Thu Jul 25 14:00:17 2024 00:12:08.664 read: IOPS=1701, BW=6805KiB/s (6969kB/s)(6812KiB/1001msec) 00:12:08.664 slat (usec): min=7, max=120, avg=13.77, stdev= 9.03 00:12:08.664 clat (usec): min=164, max=1142, avg=302.01, stdev=62.31 00:12:08.664 lat (usec): min=179, max=1173, avg=315.78, stdev=65.77 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 255], 00:12:08.664 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 314], 00:12:08.664 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 412], 00:12:08.664 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 635], 99.95th=[ 1139], 00:12:08.664 | 99.99th=[ 1139] 00:12:08.664 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:08.664 slat (usec): min=11, max=186, avg=23.04, stdev=12.40 00:12:08.664 clat (usec): min=98, max=6449, avg=199.38, stdev=235.76 00:12:08.664 lat (usec): min=110, max=6461, avg=222.42, stdev=237.23 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 114], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 135], 00:12:08.664 | 30.00th=[ 145], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 198], 00:12:08.664 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 281], 95.00th=[ 306], 00:12:08.664 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 4948], 99.95th=[ 4948], 00:12:08.664 | 99.99th=[ 6456] 00:12:08.664 bw ( KiB/s): min= 8192, max= 8192, per=18.53%, avg=8192.00, stdev= 0.00, samples=1 00:12:08.664 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:08.664 lat (usec) : 100=0.03%, 250=54.81%, 500=44.63%, 750=0.29%, 1000=0.05% 00:12:08.664 lat (msec) : 2=0.03%, 4=0.08%, 10=0.08% 00:12:08.664 cpu : usr=1.10%, sys=5.70%, ctx=3752, majf=0, minf=12 00:12:08.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.664 issued rwts: total=1703,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.664 job3: (groupid=0, jobs=1): err= 0: pid=67905: Thu Jul 25 14:00:17 2024 00:12:08.664 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:08.664 slat (nsec): min=6556, max=36829, avg=9085.38, stdev=2576.49 00:12:08.664 clat (usec): min=132, max=238, avg=162.68, stdev=13.35 00:12:08.664 lat (usec): min=139, max=245, avg=171.77, stdev=14.35 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:12:08.664 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:12:08.664 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:12:08.664 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 225], 00:12:08.664 | 99.99th=[ 239] 00:12:08.664 write: IOPS=3327, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:12:08.664 slat (usec): min=8, max=113, avg=15.75, stdev= 7.53 00:12:08.664 clat (usec): min=92, max=256, avg=123.52, stdev=12.65 00:12:08.664 lat 
(usec): min=104, max=370, avg=139.27, stdev=16.56 00:12:08.664 clat percentiles (usec): 00:12:08.664 | 1.00th=[ 102], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:12:08.665 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:12:08.665 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:12:08.665 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 215], 00:12:08.665 | 99.99th=[ 258] 00:12:08.665 bw ( KiB/s): min=13392, max=13392, per=30.30%, avg=13392.00, stdev= 0.00, samples=1 00:12:08.665 iops : min= 3348, max= 3348, avg=3348.00, stdev= 0.00, samples=1 00:12:08.665 lat (usec) : 100=0.19%, 250=99.80%, 500=0.02% 00:12:08.665 cpu : usr=1.60%, sys=6.50%, ctx=6407, majf=0, minf=9 00:12:08.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.665 issued rwts: total=3072,3331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.665 00:12:08.665 Run status group 0 (all jobs): 00:12:08.665 READ: bw=39.6MiB/s (41.5MB/s), 6805KiB/s-13.0MiB/s (6969kB/s-13.6MB/s), io=39.6MiB (41.6MB), run=1001-1001msec 00:12:08.665 WRITE: bw=43.2MiB/s (45.3MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=43.2MiB (45.3MB), run=1001-1001msec 00:12:08.665 00:12:08.665 Disk stats (read/write): 00:12:08.665 nvme0n1: ios=2978/3072, merge=0/0, ticks=468/359, in_queue=827, util=88.99% 00:12:08.665 nvme0n2: ios=1784/2048, merge=0/0, ticks=501/352, in_queue=853, util=88.73% 00:12:08.665 nvme0n3: ios=1557/1609, merge=0/0, ticks=501/342, in_queue=843, util=88.47% 00:12:08.665 nvme0n4: ios=2598/3072, merge=0/0, ticks=432/395, in_queue=827, util=89.93% 00:12:08.665 14:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:08.665 [global] 00:12:08.665 thread=1 00:12:08.665 invalidate=1 00:12:08.665 rw=write 00:12:08.665 time_based=1 00:12:08.665 runtime=1 00:12:08.665 ioengine=libaio 00:12:08.665 direct=1 00:12:08.665 bs=4096 00:12:08.665 iodepth=128 00:12:08.665 norandommap=0 00:12:08.665 numjobs=1 00:12:08.665 00:12:08.665 verify_dump=1 00:12:08.665 verify_backlog=512 00:12:08.665 verify_state_save=0 00:12:08.665 do_verify=1 00:12:08.665 verify=crc32c-intel 00:12:08.665 [job0] 00:12:08.665 filename=/dev/nvme0n1 00:12:08.665 [job1] 00:12:08.665 filename=/dev/nvme0n2 00:12:08.665 [job2] 00:12:08.665 filename=/dev/nvme0n3 00:12:08.665 [job3] 00:12:08.665 filename=/dev/nvme0n4 00:12:08.665 Could not set queue depth (nvme0n1) 00:12:08.665 Could not set queue depth (nvme0n2) 00:12:08.665 Could not set queue depth (nvme0n3) 00:12:08.665 Could not set queue depth (nvme0n4) 00:12:08.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.665 fio-3.35 00:12:08.665 Starting 4 threads 00:12:10.042 00:12:10.042 job0: (groupid=0, jobs=1): err= 0: pid=67958: Thu Jul 25 14:00:19 2024 00:12:10.042 read: IOPS=4603, 
BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:10.042 slat (usec): min=7, max=6671, avg=99.59, stdev=424.75 00:12:10.042 clat (usec): min=9614, max=18571, avg=13504.45, stdev=933.16 00:12:10.042 lat (usec): min=10179, max=18597, avg=13604.04, stdev=842.26 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[10945], 5.00th=[12649], 10.00th=[13042], 20.00th=[13173], 00:12:10.042 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:12:10.042 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14222], 00:12:10.042 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:12:10.042 | 99.99th=[18482] 00:12:10.042 write: IOPS=5025, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1001msec); 0 zone resets 00:12:10.042 slat (usec): min=6, max=5280, avg=98.02, stdev=358.10 00:12:10.042 clat (usec): min=520, max=15641, avg=12713.32, stdev=1233.79 00:12:10.042 lat (usec): min=566, max=15685, avg=12811.34, stdev=1189.35 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[ 6128], 5.00th=[11207], 10.00th=[12256], 20.00th=[12518], 00:12:10.042 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[12911], 00:12:10.042 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:12:10.042 | 99.00th=[15270], 99.50th=[15533], 99.90th=[15664], 99.95th=[15664], 00:12:10.042 | 99.99th=[15664] 00:12:10.042 bw ( KiB/s): min=20480, max=20480, per=27.24%, avg=20480.00, stdev= 0.00, samples=1 00:12:10.042 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:10.042 lat (usec) : 750=0.05%, 1000=0.02% 00:12:10.042 lat (msec) : 4=0.33%, 10=0.71%, 20=98.89% 00:12:10.042 cpu : usr=4.40%, sys=20.50%, ctx=348, majf=0, minf=11 00:12:10.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.042 issued rwts: total=4608,5031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.042 job1: (groupid=0, jobs=1): err= 0: pid=67959: Thu Jul 25 14:00:19 2024 00:12:10.042 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:12:10.042 slat (usec): min=6, max=3145, avg=100.42, stdev=386.57 00:12:10.042 clat (usec): min=10011, max=14952, avg=13652.76, stdev=569.52 00:12:10.042 lat (usec): min=12122, max=14973, avg=13753.19, stdev=425.94 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[11207], 5.00th=[12780], 10.00th=[13042], 20.00th=[13304], 00:12:10.042 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:12:10.042 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14222], 95.00th=[14353], 00:12:10.042 | 99.00th=[14746], 99.50th=[14746], 99.90th=[15008], 99.95th=[15008], 00:12:10.042 | 99.99th=[15008] 00:12:10.042 write: IOPS=4744, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1002msec); 0 zone resets 00:12:10.042 slat (usec): min=6, max=15028, avg=102.70, stdev=414.28 00:12:10.042 clat (usec): min=134, max=26046, avg=13317.21, stdev=2415.51 00:12:10.042 lat (usec): min=2037, max=26079, avg=13419.90, stdev=2396.24 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[ 5866], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:12:10.042 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:12:10.042 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13829], 95.00th=[14222], 00:12:10.042 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 
00:12:10.042 | 99.99th=[26084] 00:12:10.042 bw ( KiB/s): min=17516, max=19464, per=24.59%, avg=18490.00, stdev=1377.44, samples=2 00:12:10.042 iops : min= 4379, max= 4866, avg=4622.50, stdev=344.36, samples=2 00:12:10.042 lat (usec) : 250=0.01% 00:12:10.042 lat (msec) : 4=0.34%, 10=0.70%, 20=97.59%, 50=1.36% 00:12:10.042 cpu : usr=6.09%, sys=20.38%, ctx=359, majf=0, minf=13 00:12:10.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.042 issued rwts: total=4608,4754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.042 job2: (groupid=0, jobs=1): err= 0: pid=67960: Thu Jul 25 14:00:19 2024 00:12:10.042 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:10.042 slat (usec): min=6, max=3581, avg=114.23, stdev=449.14 00:12:10.042 clat (usec): min=10930, max=17940, avg=15070.67, stdev=749.06 00:12:10.042 lat (usec): min=11447, max=17957, avg=15184.90, stdev=611.82 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[12256], 5.00th=[13304], 10.00th=[14091], 20.00th=[14877], 00:12:10.042 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15270], 60.00th=[15270], 00:12:10.042 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15664], 95.00th=[15795], 00:12:10.042 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:12:10.042 | 99.99th=[17957] 00:12:10.042 write: IOPS=4545, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1002msec); 0 zone resets 00:12:10.042 slat (usec): min=8, max=3441, avg=110.77, stdev=479.42 00:12:10.042 clat (usec): min=362, max=18068, avg=14196.76, stdev=1366.66 00:12:10.042 lat (usec): min=3451, max=18083, avg=14307.53, stdev=1297.85 00:12:10.042 clat percentiles (usec): 00:12:10.042 | 1.00th=[ 7373], 5.00th=[12256], 10.00th=[13566], 20.00th=[13960], 00:12:10.042 | 30.00th=[14091], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:12:10.042 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15139], 95.00th=[15401], 00:12:10.042 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17171], 99.95th=[17695], 00:12:10.042 | 99.99th=[17957] 00:12:10.042 bw ( KiB/s): min=17240, max=17240, per=22.93%, avg=17240.00, stdev= 0.00, samples=1 00:12:10.042 iops : min= 4310, max= 4310, avg=4310.00, stdev= 0.00, samples=1 00:12:10.042 lat (usec) : 500=0.01% 00:12:10.042 lat (msec) : 4=0.37%, 10=0.37%, 20=99.25% 00:12:10.042 cpu : usr=2.90%, sys=12.89%, ctx=403, majf=0, minf=11 00:12:10.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.042 issued rwts: total=4096,4555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.042 job3: (groupid=0, jobs=1): err= 0: pid=67961: Thu Jul 25 14:00:19 2024 00:12:10.042 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:10.042 slat (usec): min=3, max=6246, avg=116.00, stdev=500.07 00:12:10.043 clat (usec): min=9679, max=21186, avg=15376.78, stdev=1047.17 00:12:10.043 lat (usec): min=11687, max=21208, avg=15492.78, stdev=1058.17 00:12:10.043 clat percentiles (usec): 00:12:10.043 | 1.00th=[12256], 5.00th=[13566], 10.00th=[14091], 20.00th=[15008], 00:12:10.043 | 30.00th=[15139], 40.00th=[15270], 
50.00th=[15401], 60.00th=[15533], 00:12:10.043 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16319], 95.00th=[17171], 00:12:10.043 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[20317], 00:12:10.043 | 99.99th=[21103] 00:12:10.043 write: IOPS=4487, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1002msec); 0 zone resets 00:12:10.043 slat (usec): min=6, max=6775, avg=107.77, stdev=586.08 00:12:10.043 clat (usec): min=1550, max=21249, avg=14183.59, stdev=1569.00 00:12:10.043 lat (usec): min=1591, max=21276, avg=14291.36, stdev=1657.05 00:12:10.043 clat percentiles (usec): 00:12:10.043 | 1.00th=[ 8160], 5.00th=[11863], 10.00th=[13173], 20.00th=[13698], 00:12:10.043 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:12:10.043 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15926], 00:12:10.043 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20841], 99.95th=[21103], 00:12:10.043 | 99.99th=[21365] 00:12:10.043 bw ( KiB/s): min=17109, max=17109, per=22.75%, avg=17109.00, stdev= 0.00, samples=1 00:12:10.043 iops : min= 4277, max= 4277, avg=4277.00, stdev= 0.00, samples=1 00:12:10.043 lat (msec) : 2=0.12%, 4=0.08%, 10=0.84%, 20=98.79%, 50=0.17% 00:12:10.043 cpu : usr=3.80%, sys=16.88%, ctx=260, majf=0, minf=15 00:12:10.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.043 issued rwts: total=4096,4496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.043 00:12:10.043 Run status group 0 (all jobs): 00:12:10.043 READ: bw=67.9MiB/s (71.2MB/s), 16.0MiB/s-18.0MiB/s (16.7MB/s-18.9MB/s), io=68.0MiB (71.3MB), run=1001-1002msec 00:12:10.043 WRITE: bw=73.4MiB/s (77.0MB/s), 17.5MiB/s-19.6MiB/s (18.4MB/s-20.6MB/s), io=73.6MiB (77.2MB), run=1001-1002msec 00:12:10.043 00:12:10.043 Disk stats (read/write): 00:12:10.043 nvme0n1: ios=4146/4384, merge=0/0, ticks=12008/11415, in_queue=23423, util=90.21% 00:12:10.043 nvme0n2: ios=4145/4113, merge=0/0, ticks=12551/10952, in_queue=23503, util=90.26% 00:12:10.043 nvme0n3: ios=3616/4042, merge=0/0, ticks=12644/12283, in_queue=24927, util=90.16% 00:12:10.043 nvme0n4: ios=3615/3974, merge=0/0, ticks=26483/23024, in_queue=49507, util=90.84% 00:12:10.043 14:00:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:10.043 [global] 00:12:10.043 thread=1 00:12:10.043 invalidate=1 00:12:10.043 rw=randwrite 00:12:10.043 time_based=1 00:12:10.043 runtime=1 00:12:10.043 ioengine=libaio 00:12:10.043 direct=1 00:12:10.043 bs=4096 00:12:10.043 iodepth=128 00:12:10.043 norandommap=0 00:12:10.043 numjobs=1 00:12:10.043 00:12:10.043 verify_dump=1 00:12:10.043 verify_backlog=512 00:12:10.043 verify_state_save=0 00:12:10.043 do_verify=1 00:12:10.043 verify=crc32c-intel 00:12:10.043 [job0] 00:12:10.043 filename=/dev/nvme0n1 00:12:10.043 [job1] 00:12:10.043 filename=/dev/nvme0n2 00:12:10.043 [job2] 00:12:10.043 filename=/dev/nvme0n3 00:12:10.043 [job3] 00:12:10.043 filename=/dev/nvme0n4 00:12:10.043 Could not set queue depth (nvme0n1) 00:12:10.043 Could not set queue depth (nvme0n2) 00:12:10.043 Could not set queue depth (nvme0n3) 00:12:10.043 Could not set queue depth (nvme0n4) 00:12:10.043 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:12:10.043 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.043 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.043 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.043 fio-3.35 00:12:10.043 Starting 4 threads 00:12:11.423 00:12:11.423 job0: (groupid=0, jobs=1): err= 0: pid=68021: Thu Jul 25 14:00:20 2024 00:12:11.423 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:11.423 slat (usec): min=2, max=10770, avg=166.83, stdev=793.20 00:12:11.423 clat (usec): min=12222, max=59030, avg=22451.22, stdev=5646.89 00:12:11.423 lat (usec): min=12251, max=61890, avg=22618.05, stdev=5696.88 00:12:11.423 clat percentiles (usec): 00:12:11.423 | 1.00th=[14222], 5.00th=[15401], 10.00th=[15795], 20.00th=[17695], 00:12:11.423 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22414], 60.00th=[22676], 00:12:11.423 | 70.00th=[22938], 80.00th=[24249], 90.00th=[28181], 95.00th=[32113], 00:12:11.423 | 99.00th=[46400], 99.50th=[52691], 99.90th=[58983], 99.95th=[58983], 00:12:11.423 | 99.99th=[58983] 00:12:11.423 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec); 0 zone resets 00:12:11.423 slat (usec): min=7, max=15461, avg=184.49, stdev=952.29 00:12:11.423 clat (usec): min=651, max=64146, avg=23210.34, stdev=9479.45 00:12:11.423 lat (usec): min=4232, max=64176, avg=23394.83, stdev=9559.66 00:12:11.423 clat percentiles (usec): 00:12:11.423 | 1.00th=[ 5211], 5.00th=[11731], 10.00th=[12911], 20.00th=[16909], 00:12:11.423 | 30.00th=[19792], 40.00th=[21103], 50.00th=[21890], 60.00th=[22676], 00:12:11.423 | 70.00th=[24249], 80.00th=[26870], 90.00th=[33162], 95.00th=[43779], 00:12:11.423 | 99.00th=[59507], 99.50th=[62129], 99.90th=[64226], 99.95th=[64226], 00:12:11.423 | 99.99th=[64226] 00:12:11.423 bw ( KiB/s): min=10496, max=12288, per=16.36%, avg=11392.00, stdev=1267.14, samples=2 00:12:11.423 iops : min= 2624, max= 3072, avg=2848.00, stdev=316.78, samples=2 00:12:11.423 lat (usec) : 750=0.02% 00:12:11.423 lat (msec) : 10=1.23%, 20=27.01%, 50=69.60%, 100=2.15% 00:12:11.423 cpu : usr=3.39%, sys=9.87%, ctx=256, majf=0, minf=17 00:12:11.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:11.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.424 issued rwts: total=2560,2976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.424 job1: (groupid=0, jobs=1): err= 0: pid=68022: Thu Jul 25 14:00:20 2024 00:12:11.424 read: IOPS=5671, BW=22.2MiB/s (23.2MB/s)(22.2MiB/1004msec) 00:12:11.424 slat (usec): min=3, max=5366, avg=79.00, stdev=447.18 00:12:11.424 clat (usec): min=849, max=18692, avg=11166.92, stdev=1401.61 00:12:11.424 lat (usec): min=4246, max=22049, avg=11245.92, stdev=1419.10 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[10683], 00:12:11.424 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:12:11.424 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:12:11.424 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:12:11.424 | 99.99th=[18744] 00:12:11.424 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone 
resets 00:12:11.424 slat (usec): min=6, max=6298, avg=81.46, stdev=392.64 00:12:11.424 clat (usec): min=5287, max=14934, avg=10338.53, stdev=1032.63 00:12:11.424 lat (usec): min=5733, max=14971, avg=10419.98, stdev=975.54 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9765], 00:12:11.424 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:12:11.424 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11994], 00:12:11.424 | 99.00th=[14615], 99.50th=[14877], 99.90th=[14877], 99.95th=[14877], 00:12:11.424 | 99.99th=[14877] 00:12:11.424 bw ( KiB/s): min=24048, max=24625, per=34.96%, avg=24336.50, stdev=408.00, samples=2 00:12:11.424 iops : min= 6012, max= 6156, avg=6084.00, stdev=101.82, samples=2 00:12:11.424 lat (usec) : 1000=0.01% 00:12:11.424 lat (msec) : 10=19.83%, 20=80.16% 00:12:11.424 cpu : usr=5.58%, sys=21.24%, ctx=257, majf=0, minf=13 00:12:11.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:11.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.424 issued rwts: total=5694,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.424 job2: (groupid=0, jobs=1): err= 0: pid=68023: Thu Jul 25 14:00:20 2024 00:12:11.424 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:11.424 slat (usec): min=8, max=2841, avg=90.54, stdev=377.45 00:12:11.424 clat (usec): min=9224, max=19060, avg=12455.28, stdev=1120.64 00:12:11.424 lat (usec): min=10876, max=19082, avg=12545.82, stdev=1061.73 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11600], 20.00th=[11863], 00:12:11.424 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:12:11.424 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:12:11.424 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:12:11.424 | 99.99th=[19006] 00:12:11.424 write: IOPS=5270, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1002msec); 0 zone resets 00:12:11.424 slat (usec): min=18, max=7176, avg=91.91, stdev=329.83 00:12:11.424 clat (usec): min=138, max=14392, avg=11883.27, stdev=1075.34 00:12:11.424 lat (usec): min=2455, max=18321, avg=11975.18, stdev=1039.95 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[ 6194], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:12:11.424 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:12:11.424 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:12:11.424 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13829], 99.95th=[14222], 00:12:11.424 | 99.99th=[14353] 00:12:11.424 bw ( KiB/s): min=20216, max=21050, per=29.64%, avg=20633.00, stdev=589.73, samples=2 00:12:11.424 iops : min= 5054, max= 5262, avg=5158.00, stdev=147.08, samples=2 00:12:11.424 lat (usec) : 250=0.01% 00:12:11.424 lat (msec) : 4=0.31%, 10=1.65%, 20=98.03% 00:12:11.424 cpu : usr=6.19%, sys=20.58%, ctx=420, majf=0, minf=14 00:12:11.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:11.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.424 issued rwts: total=5120,5281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.424 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:12:11.424 job3: (groupid=0, jobs=1): err= 0: pid=68024: Thu Jul 25 14:00:20 2024 00:12:11.424 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1003msec) 00:12:11.424 slat (usec): min=16, max=16047, avg=186.63, stdev=946.03 00:12:11.424 clat (usec): min=989, max=49103, avg=21972.18, stdev=5114.99 00:12:11.424 lat (usec): min=11471, max=49130, avg=22158.81, stdev=5141.92 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[13173], 5.00th=[15401], 10.00th=[16450], 20.00th=[17171], 00:12:11.424 | 30.00th=[20055], 40.00th=[21627], 50.00th=[22152], 60.00th=[22414], 00:12:11.424 | 70.00th=[22938], 80.00th=[23462], 90.00th=[28181], 95.00th=[32113], 00:12:11.424 | 99.00th=[40633], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:12:11.424 | 99.99th=[49021] 00:12:11.424 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:12:11.424 slat (usec): min=22, max=9315, avg=157.45, stdev=771.66 00:12:11.424 clat (usec): min=8512, max=71489, avg=22424.28, stdev=9322.20 00:12:11.424 lat (usec): min=8561, max=71524, avg=22581.73, stdev=9388.23 00:12:11.424 clat percentiles (usec): 00:12:11.424 | 1.00th=[11076], 5.00th=[12387], 10.00th=[12518], 20.00th=[15664], 00:12:11.424 | 30.00th=[18220], 40.00th=[20579], 50.00th=[21365], 60.00th=[22152], 00:12:11.424 | 70.00th=[23200], 80.00th=[24773], 90.00th=[32900], 95.00th=[40633], 00:12:11.424 | 99.00th=[63701], 99.50th=[64750], 99.90th=[71828], 99.95th=[71828], 00:12:11.424 | 99.99th=[71828] 00:12:11.424 bw ( KiB/s): min=11768, max=12312, per=17.30%, avg=12040.00, stdev=384.67, samples=2 00:12:11.424 iops : min= 2942, max= 3078, avg=3010.00, stdev=96.17, samples=2 00:12:11.424 lat (usec) : 1000=0.02% 00:12:11.424 lat (msec) : 10=0.12%, 20=33.82%, 50=64.67%, 100=1.37% 00:12:11.424 cpu : usr=3.49%, sys=11.48%, ctx=266, majf=0, minf=5 00:12:11.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:11.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.424 issued rwts: total=2623,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.424 00:12:11.424 Run status group 0 (all jobs): 00:12:11.424 READ: bw=62.2MiB/s (65.3MB/s), 9.96MiB/s-22.2MiB/s (10.4MB/s-23.2MB/s), io=62.5MiB (65.5MB), run=1002-1004msec 00:12:11.424 WRITE: bw=68.0MiB/s (71.3MB/s), 11.6MiB/s-23.9MiB/s (12.1MB/s-25.1MB/s), io=68.3MiB (71.6MB), run=1002-1004msec 00:12:11.424 00:12:11.424 Disk stats (read/write): 00:12:11.424 nvme0n1: ios=2097/2343, merge=0/0, ticks=23181/27430, in_queue=50611, util=88.04% 00:12:11.424 nvme0n2: ios=4956/5120, merge=0/0, ticks=50893/46574, in_queue=97467, util=88.22% 00:12:11.424 nvme0n3: ios=4320/4608, merge=0/0, ticks=11444/10724, in_queue=22168, util=89.17% 00:12:11.424 nvme0n4: ios=2560/2623, merge=0/0, ticks=27687/21727, in_queue=49414, util=89.64% 00:12:11.424 14:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:11.424 14:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68038 00:12:11.424 14:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:11.424 14:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:11.424 [global] 00:12:11.424 thread=1 00:12:11.424 invalidate=1 00:12:11.424 
rw=read 00:12:11.424 time_based=1 00:12:11.424 runtime=10 00:12:11.424 ioengine=libaio 00:12:11.424 direct=1 00:12:11.424 bs=4096 00:12:11.424 iodepth=1 00:12:11.424 norandommap=1 00:12:11.424 numjobs=1 00:12:11.424 00:12:11.424 [job0] 00:12:11.424 filename=/dev/nvme0n1 00:12:11.424 [job1] 00:12:11.424 filename=/dev/nvme0n2 00:12:11.424 [job2] 00:12:11.424 filename=/dev/nvme0n3 00:12:11.424 [job3] 00:12:11.424 filename=/dev/nvme0n4 00:12:11.424 Could not set queue depth (nvme0n1) 00:12:11.424 Could not set queue depth (nvme0n2) 00:12:11.424 Could not set queue depth (nvme0n3) 00:12:11.424 Could not set queue depth (nvme0n4) 00:12:11.683 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.683 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.683 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.683 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.683 fio-3.35 00:12:11.683 Starting 4 threads 00:12:14.217 14:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:14.476 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=44126208, buflen=4096 00:12:14.476 fio: pid=68085, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:14.476 14:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:14.739 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=75579392, buflen=4096 00:12:14.739 fio: pid=68084, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:14.739 14:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.739 14:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:14.998 fio: pid=68082, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:14.998 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55508992, buflen=4096 00:12:14.998 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.998 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:15.258 fio: pid=68083, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:15.258 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20709376, buflen=4096 00:12:15.258 00:12:15.258 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68082: Thu Jul 25 14:00:24 2024 00:12:15.258 read: IOPS=4105, BW=16.0MiB/s (16.8MB/s)(52.9MiB/3301msec) 00:12:15.258 slat (usec): min=5, max=13659, avg=11.50, stdev=185.05 00:12:15.258 clat (usec): min=102, max=1880, avg=231.17, stdev=55.77 00:12:15.258 lat (usec): min=112, max=13817, avg=242.66, stdev=191.95 00:12:15.258 clat percentiles (usec): 00:12:15.258 | 1.00th=[ 125], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 174], 00:12:15.258 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:12:15.258 | 
70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:12:15.258 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 519], 99.95th=[ 742], 00:12:15.258 | 99.99th=[ 1663] 00:12:15.258 bw ( KiB/s): min=15000, max=18208, per=21.83%, avg=15839.00, stdev=1187.87, samples=6 00:12:15.258 iops : min= 3750, max= 4552, avg=3959.67, stdev=297.01, samples=6 00:12:15.258 lat (usec) : 250=55.58%, 500=44.31%, 750=0.06% 00:12:15.258 lat (msec) : 2=0.04% 00:12:15.258 cpu : usr=0.58%, sys=3.21%, ctx=13559, majf=0, minf=1 00:12:15.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 issued rwts: total=13553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.258 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68083: Thu Jul 25 14:00:24 2024 00:12:15.258 read: IOPS=6056, BW=23.7MiB/s (24.8MB/s)(83.8MiB/3540msec) 00:12:15.258 slat (usec): min=5, max=14209, avg=10.46, stdev=168.38 00:12:15.258 clat (usec): min=86, max=170194, avg=153.87, stdev=1171.55 00:12:15.258 lat (usec): min=92, max=170207, avg=164.33, stdev=1183.68 00:12:15.258 clat percentiles (usec): 00:12:15.258 | 1.00th=[ 102], 5.00th=[ 117], 10.00th=[ 128], 20.00th=[ 135], 00:12:15.258 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:12:15.258 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:12:15.258 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 314], 99.95th=[ 725], 00:12:15.258 | 99.99th=[ 3752] 00:12:15.258 bw ( KiB/s): min=16613, max=26168, per=33.29%, avg=24159.83, stdev=3711.46, samples=6 00:12:15.258 iops : min= 4153, max= 6542, avg=6039.83, stdev=927.93, samples=6 00:12:15.258 lat (usec) : 100=0.72%, 250=99.14%, 500=0.07%, 750=0.02%, 1000=0.01% 00:12:15.258 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01%, 250=0.01% 00:12:15.258 cpu : usr=0.73%, sys=4.63%, ctx=21451, majf=0, minf=1 00:12:15.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 issued rwts: total=21441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.258 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68084: Thu Jul 25 14:00:24 2024 00:12:15.258 read: IOPS=5975, BW=23.3MiB/s (24.5MB/s)(72.1MiB/3088msec) 00:12:15.258 slat (usec): min=5, max=12810, avg= 9.72, stdev=111.75 00:12:15.258 clat (usec): min=111, max=9245, avg=156.76, stdev=71.68 00:12:15.258 lat (usec): min=119, max=12985, avg=166.48, stdev=133.04 00:12:15.258 clat percentiles (usec): 00:12:15.258 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:12:15.258 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:12:15.258 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:12:15.258 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 302], 99.95th=[ 445], 00:12:15.258 | 99.99th=[ 1680] 00:12:15.258 bw ( KiB/s): min=23728, max=24784, per=33.31%, avg=24172.60, stdev=399.89, samples=5 00:12:15.258 iops : min= 5932, max= 6196, avg=6043.00, stdev=99.93, samples=5 00:12:15.258 lat (usec) : 250=99.84%, 500=0.11%, 
750=0.01%, 1000=0.01% 00:12:15.258 lat (msec) : 2=0.02%, 10=0.01% 00:12:15.258 cpu : usr=0.65%, sys=5.12%, ctx=18459, majf=0, minf=1 00:12:15.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 issued rwts: total=18453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.258 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68085: Thu Jul 25 14:00:24 2024 00:12:15.258 read: IOPS=3779, BW=14.8MiB/s (15.5MB/s)(42.1MiB/2851msec) 00:12:15.258 slat (nsec): min=6096, max=96270, avg=8621.75, stdev=3202.18 00:12:15.258 clat (usec): min=135, max=6838, avg=254.89, stdev=98.54 00:12:15.258 lat (usec): min=148, max=6847, avg=263.51, stdev=98.99 00:12:15.258 clat percentiles (usec): 00:12:15.258 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:12:15.258 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:12:15.258 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:12:15.258 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 465], 99.95th=[ 1139], 00:12:15.258 | 99.99th=[ 5800] 00:12:15.258 bw ( KiB/s): min=14706, max=15440, per=20.84%, avg=15120.60, stdev=276.06, samples=5 00:12:15.258 iops : min= 3676, max= 3860, avg=3780.00, stdev=69.18, samples=5 00:12:15.258 lat (usec) : 250=44.92%, 500=54.98%, 750=0.02%, 1000=0.01% 00:12:15.258 lat (msec) : 2=0.02%, 4=0.01%, 10=0.03% 00:12:15.258 cpu : usr=0.63%, sys=2.98%, ctx=10779, majf=0, minf=2 00:12:15.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.258 issued rwts: total=10774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.258 00:12:15.258 Run status group 0 (all jobs): 00:12:15.258 READ: bw=70.9MiB/s (74.3MB/s), 14.8MiB/s-23.7MiB/s (15.5MB/s-24.8MB/s), io=251MiB (263MB), run=2851-3540msec 00:12:15.258 00:12:15.258 Disk stats (read/write): 00:12:15.258 nvme0n1: ios=12488/0, merge=0/0, ticks=2991/0, in_queue=2991, util=95.35% 00:12:15.258 nvme0n2: ios=20187/0, merge=0/0, ticks=3189/0, in_queue=3189, util=95.54% 00:12:15.258 nvme0n3: ios=17332/0, merge=0/0, ticks=2739/0, in_queue=2739, util=96.75% 00:12:15.258 nvme0n4: ios=9945/0, merge=0/0, ticks=2540/0, in_queue=2540, util=96.22% 00:12:15.258 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.258 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:15.518 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.519 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:15.778 14:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.778 14:00:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:16.038 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.038 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 68038 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:16.297 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.559 nvmf hotplug test: fio failed as expected 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.559 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.819 rmmod nvme_tcp 00:12:16.819 rmmod nvme_fabrics 00:12:16.819 rmmod nvme_keyring 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67662 ']' 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67662 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67662 ']' 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67662 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67662 00:12:16.819 killing process with pid 67662 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67662' 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67662 00:12:16.819 14:00:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67662 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:17.078 00:12:17.078 real 0m18.578s 00:12:17.078 user 1m11.894s 00:12:17.078 sys 0m8.742s 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.078 14:00:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.078 ************************************ 00:12:17.078 END TEST nvmf_fio_target 00:12:17.078 ************************************ 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.078 ************************************ 00:12:17.078 START TEST nvmf_bdevio 00:12:17.078 ************************************ 00:12:17.078 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:17.338 * Looking for test storage... 00:12:17.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.338 14:00:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.338 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:17.339 Cannot find device "nvmf_tgt_br" 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.339 Cannot find device "nvmf_tgt_br2" 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:17.339 Cannot find device "nvmf_tgt_br" 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:17.339 Cannot find device "nvmf_tgt_br2" 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:17.339 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:17.598 14:00:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:17.598 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:17.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:17.598 00:12:17.599 --- 10.0.0.2 ping statistics --- 00:12:17.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.599 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:17.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:17.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:12:17.599 00:12:17.599 --- 10.0.0.3 ping statistics --- 00:12:17.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.599 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:17.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:12:17.599 00:12:17.599 --- 10.0.0.1 ping statistics --- 00:12:17.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.599 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68344 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68344 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68344 ']' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.599 14:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.858 [2024-07-25 14:00:26.903369] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:12:17.858 [2024-07-25 14:00:26.903437] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.858 [2024-07-25 14:00:27.044191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.858 [2024-07-25 14:00:27.146289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.858 [2024-07-25 14:00:27.146354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.858 [2024-07-25 14:00:27.146360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.858 [2024-07-25 14:00:27.146365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.858 [2024-07-25 14:00:27.146369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.858 [2024-07-25 14:00:27.146504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:17.858 [2024-07-25 14:00:27.147446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:17.858 [2024-07-25 14:00:27.147483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:17.858 [2024-07-25 14:00:27.147490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.115 [2024-07-25 14:00:27.208831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 [2024-07-25 14:00:27.803345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 Malloc0 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 [2024-07-25 14:00:27.872373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:18.683 { 00:12:18.683 "params": { 00:12:18.683 "name": "Nvme$subsystem", 00:12:18.683 "trtype": "$TEST_TRANSPORT", 00:12:18.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:18.683 "adrfam": "ipv4", 00:12:18.683 "trsvcid": "$NVMF_PORT", 00:12:18.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:18.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:18.683 "hdgst": ${hdgst:-false}, 00:12:18.683 "ddgst": ${ddgst:-false} 00:12:18.683 }, 00:12:18.683 "method": "bdev_nvme_attach_controller" 00:12:18.683 } 00:12:18.683 EOF 00:12:18.683 )") 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:18.683 14:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:18.683 "params": { 00:12:18.683 "name": "Nvme1", 00:12:18.683 "trtype": "tcp", 00:12:18.683 "traddr": "10.0.0.2", 00:12:18.683 "adrfam": "ipv4", 00:12:18.683 "trsvcid": "4420", 00:12:18.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:18.683 "hdgst": false, 00:12:18.683 "ddgst": false 00:12:18.683 }, 00:12:18.683 "method": "bdev_nvme_attach_controller" 00:12:18.683 }' 00:12:18.683 [2024-07-25 14:00:27.928374] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:12:18.683 [2024-07-25 14:00:27.928435] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68380 ] 00:12:18.943 [2024-07-25 14:00:28.069521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:18.943 [2024-07-25 14:00:28.177480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.943 [2024-07-25 14:00:28.177541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.943 [2024-07-25 14:00:28.177542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.943 [2024-07-25 14:00:28.230514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:19.216 I/O targets: 00:12:19.216 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:19.216 00:12:19.216 00:12:19.216 CUnit - A unit testing framework for C - Version 2.1-3 00:12:19.216 http://cunit.sourceforge.net/ 00:12:19.216 00:12:19.216 00:12:19.216 Suite: bdevio tests on: Nvme1n1 00:12:19.216 Test: blockdev write read block ...passed 00:12:19.216 Test: blockdev write zeroes read block ...passed 00:12:19.216 Test: blockdev write zeroes read no split ...passed 00:12:19.216 Test: blockdev write zeroes read split ...passed 00:12:19.216 Test: blockdev write zeroes read split partial ...passed 00:12:19.216 Test: blockdev reset ...[2024-07-25 14:00:28.365863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:19.216 [2024-07-25 14:00:28.365950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc37c0 (9): Bad file descriptor 00:12:19.216 [2024-07-25 14:00:28.377751] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:19.216 passed 00:12:19.216 Test: blockdev write read 8 blocks ...passed 00:12:19.216 Test: blockdev write read size > 128k ...passed 00:12:19.216 Test: blockdev write read invalid size ...passed 00:12:19.216 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.216 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.216 Test: blockdev write read max offset ...passed 00:12:19.216 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.216 Test: blockdev writev readv 8 blocks ...passed 00:12:19.216 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.216 Test: blockdev writev readv block ...passed 00:12:19.216 Test: blockdev writev readv size > 128k ...passed 00:12:19.216 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.216 Test: blockdev comparev and writev ...[2024-07-25 14:00:28.383759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.383836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.383905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.383960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.384252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.384328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.384401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.384454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.384732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.384786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.384862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.384924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.385214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.385268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.385339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:19.216 [2024-07-25 14:00:28.385402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:19.216 passed 00:12:19.216 Test: blockdev nvme passthru rw ...passed 00:12:19.216 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:00:28.386105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:19.216 [2024-07-25 14:00:28.386162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.386316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:19.216 [2024-07-25 14:00:28.386370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.386517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:19.216 [2024-07-25 14:00:28.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:19.216 [2024-07-25 14:00:28.386722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:19.216 [2024-07-25 14:00:28.386774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:19.216 passed 00:12:19.216 Test: blockdev nvme admin passthru ...passed 00:12:19.216 Test: blockdev copy ...passed 00:12:19.216 00:12:19.216 Run Summary: Type Total Ran Passed Failed Inactive 00:12:19.216 suites 1 1 n/a 0 0 00:12:19.216 tests 23 23 23 0 0 00:12:19.216 asserts 152 152 152 0 n/a 00:12:19.216 00:12:19.216 Elapsed time = 0.135 seconds 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.480 rmmod nvme_tcp 00:12:19.480 rmmod nvme_fabrics 00:12:19.480 rmmod nvme_keyring 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68344 ']' 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68344 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68344 ']' 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68344 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68344 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68344' 00:12:19.480 killing process with pid 68344 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68344 00:12:19.480 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68344 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.740 14:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.740 14:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:19.740 00:12:19.740 real 0m2.736s 00:12:19.740 user 0m8.456s 00:12:19.740 sys 0m0.789s 00:12:19.740 14:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.740 14:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 ************************************ 00:12:19.740 END TEST nvmf_bdevio 00:12:19.740 ************************************ 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:20.000 00:12:20.000 real 2m29.034s 00:12:20.000 user 6m41.277s 00:12:20.000 sys 0m46.959s 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:20.000 ************************************ 00:12:20.000 END TEST nvmf_target_core 00:12:20.000 ************************************ 00:12:20.000 14:00:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:20.000 14:00:29 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.000 14:00:29 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.000 14:00:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.000 ************************************ 00:12:20.000 START TEST nvmf_target_extra 00:12:20.000 ************************************ 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:20.000 * Looking for test storage... 00:12:20.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:20.000 14:00:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.000 ************************************ 00:12:20.000 START TEST nvmf_auth_target 00:12:20.000 ************************************ 00:12:20.000 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:20.260 * Looking for test storage... 00:12:20.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.260 14:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.260 14:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:20.260 Cannot find device "nvmf_tgt_br" 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.260 Cannot find device "nvmf_tgt_br2" 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:20.260 Cannot find device "nvmf_tgt_br" 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:20.260 Cannot find device "nvmf_tgt_br2" 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:20.260 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:20.520 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:20.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:12:20.521 00:12:20.521 --- 10.0.0.2 ping statistics --- 00:12:20.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.521 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:20.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:20.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:20.521 00:12:20.521 --- 10.0.0.3 ping statistics --- 00:12:20.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.521 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:12:20.521 00:12:20.521 --- 10.0.0.1 ping statistics --- 00:12:20.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.521 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.521 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68604 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68604 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68604 ']' 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
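The entries above show nvmf_veth_init building the virtual test network before the target comes up: a veth pair for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and connectivity proven with the three pings. A condensed sketch of the equivalent commands, using the interface names and 10.0.0.x addresses from this trace (conventions of this run, not fixed requirements); the second target pair (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here for brevity:

ip netns add nvmf_tgt_ns_spdk                               # isolated namespace for nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge joining both halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # same reachability check as the trace

With the namespace in place, nvmf_tgt is launched through it (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD just before nvmf_veth_init returns.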
00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.802 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68636 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3b7af5eb70e9e03725155bc4da79b36a6bd71e445b85595c 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EcS 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3b7af5eb70e9e03725155bc4da79b36a6bd71e445b85595c 0 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3b7af5eb70e9e03725155bc4da79b36a6bd71e445b85595c 0 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3b7af5eb70e9e03725155bc4da79b36a6bd71e445b85595c 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:21.779 14:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EcS 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EcS 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.EcS 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5ebf427b325230a7f66f6c73f82cb9e62ead175ba112ebcd227a843649d20621 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CZ9 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5ebf427b325230a7f66f6c73f82cb9e62ead175ba112ebcd227a843649d20621 3 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5ebf427b325230a7f66f6c73f82cb9e62ead175ba112ebcd227a843649d20621 3 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5ebf427b325230a7f66f6c73f82cb9e62ead175ba112ebcd227a843649d20621 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CZ9 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CZ9 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.CZ9 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:21.779 14:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:21.779 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=45bb24447ae43d68d80132275dbe4164 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ohd 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 45bb24447ae43d68d80132275dbe4164 1 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 45bb24447ae43d68d80132275dbe4164 1 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=45bb24447ae43d68d80132275dbe4164 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ohd 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ohd 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Ohd 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:21.779 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7752d3f62c44b2020bcd9f77d4eb36a426c396696baafa6c 00:12:21.780 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4tM 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7752d3f62c44b2020bcd9f77d4eb36a426c396696baafa6c 2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7752d3f62c44b2020bcd9f77d4eb36a426c396696baafa6c 2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7752d3f62c44b2020bcd9f77d4eb36a426c396696baafa6c 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4tM 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4tM 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4tM 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c2444dc9017bfa25356759834370f75fb4034fb8d94a1c8f 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DIJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c2444dc9017bfa25356759834370f75fb4034fb8d94a1c8f 2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c2444dc9017bfa25356759834370f75fb4034fb8d94a1c8f 2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c2444dc9017bfa25356759834370f75fb4034fb8d94a1c8f 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DIJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DIJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.DIJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:22.040 14:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=89f06b83a1f9b6b22c193d6d1321f80f 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fRJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 89f06b83a1f9b6b22c193d6d1321f80f 1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 89f06b83a1f9b6b22c193d6d1321f80f 1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=89f06b83a1f9b6b22c193d6d1321f80f 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fRJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fRJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.fRJ 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff5f7bb18996095ec805502a7edac3b44bff6796793ecc6db66591eb198a04c0 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gov 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
ff5f7bb18996095ec805502a7edac3b44bff6796793ecc6db66591eb198a04c0 3 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff5f7bb18996095ec805502a7edac3b44bff6796793ecc6db66591eb198a04c0 3 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff5f7bb18996095ec805502a7edac3b44bff6796793ecc6db66591eb198a04c0 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:22.040 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gov 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gov 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Gov 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68604 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68604 ']' 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68636 /var/tmp/host.sock 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68636 ']' 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
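Each secret registered below was produced by the gen_dhchap_key calls traced above: a fixed-length hex string read from /dev/urandom, wrapped into the DHHC-1:<digest>:<base64 blob>: form, and written 0600 to a /tmp/spdk.key-* file whose path is what the keyring RPCs later receive. The trace only shows that a short inline python step does the wrapping, so the exact encoding below is an assumption based on the usual NVMe DH-HMAC-CHAP secret representation (base64 of the ASCII hex secret followed by its 4-byte CRC32); the digest field 00/01/02/03 maps to none/sha256/sha384/sha512 as in the digests table at the top of the helper. A self-contained sketch of the 48-character, null-digest case:

key=$(xxd -p -c0 -l 24 /dev/urandom)         # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-null.XXX)

# Assumed wrapping: base64(ASCII hex secret + little-endian CRC32 of it).
python3 - "$key" 0 > "$file" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])                    # 0=null, 1=sha256, 2=sha384, 3=sha512
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02}:{blob}:", end="")
PY

chmod 0600 "$file"
echo "$file"                                 # the path later handed to keyring_file_add_key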
00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.299 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EcS 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.EcS 00:12:22.585 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.EcS 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.CZ9 ]] 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CZ9 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CZ9 00:12:22.842 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CZ9 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ohd 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ohd 00:12:23.100 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ohd 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4tM ]] 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4tM 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4tM 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4tM 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DIJ 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.359 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DIJ 00:12:23.360 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DIJ 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.fRJ ]] 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fRJ 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fRJ 00:12:23.619 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fRJ 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gov 00:12:23.877 14:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Gov 00:12:23.877 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Gov 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.137 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:12:24.397 00:12:24.397 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.397 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.397 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.656 { 00:12:24.656 "cntlid": 1, 00:12:24.656 "qid": 0, 00:12:24.656 "state": "enabled", 00:12:24.656 "thread": "nvmf_tgt_poll_group_000", 00:12:24.656 "listen_address": { 00:12:24.656 "trtype": "TCP", 00:12:24.656 "adrfam": "IPv4", 00:12:24.656 "traddr": "10.0.0.2", 00:12:24.656 "trsvcid": "4420" 00:12:24.656 }, 00:12:24.656 "peer_address": { 00:12:24.656 "trtype": "TCP", 00:12:24.656 "adrfam": "IPv4", 00:12:24.656 "traddr": "10.0.0.1", 00:12:24.656 "trsvcid": "36684" 00:12:24.656 }, 00:12:24.656 "auth": { 00:12:24.656 "state": "completed", 00:12:24.656 "digest": "sha256", 00:12:24.656 "dhgroup": "null" 00:12:24.656 } 00:12:24.656 } 00:12:24.656 ]' 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.656 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.937 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:24.937 14:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.937 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.937 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.937 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.937 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.128 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.128 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.387 00:12:29.387 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.387 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.387 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
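The round now in flight for key1 has the same shape as the key0 round above, and every later (digest, dhgroup, key index) combination in this test repeats it. Condensed to its essential commands, with $rpc standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and $hostnqn/$hostid for the nqn.2014-08.org.nvmexpress:uuid:ae1cc223-... values of this run, one round looks roughly like this:

hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side spdk_tgt, as in the trace

# Limit the initiator to the digest/dhgroup combination under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow the host on the subsystem with this key pair.
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host-side SPDK initiator: attach, then verify the qpair really authenticated.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0

# Kernel initiator: repeat the handshake with the wrapped secrets, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$(cat /tmp/spdk.key-sha256.Ohd)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.4tM)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
"$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"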
00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.655 { 00:12:29.655 "cntlid": 3, 00:12:29.655 "qid": 0, 00:12:29.655 "state": "enabled", 00:12:29.655 "thread": "nvmf_tgt_poll_group_000", 00:12:29.655 "listen_address": { 00:12:29.655 "trtype": "TCP", 00:12:29.655 "adrfam": "IPv4", 00:12:29.655 "traddr": "10.0.0.2", 00:12:29.655 "trsvcid": "4420" 00:12:29.655 }, 00:12:29.655 "peer_address": { 00:12:29.655 "trtype": "TCP", 00:12:29.655 "adrfam": "IPv4", 00:12:29.655 "traddr": "10.0.0.1", 00:12:29.655 "trsvcid": "36708" 00:12:29.655 }, 00:12:29.655 "auth": { 00:12:29.655 "state": "completed", 00:12:29.655 "digest": "sha256", 00:12:29.655 "dhgroup": "null" 00:12:29.655 } 00:12:29.655 } 00:12:29.655 ]' 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.655 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.917 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:29.917 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.917 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.917 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.917 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.176 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
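The --dhchap-secret and --dhchap-ctrl-secret strings in these connect lines are just the earlier hex keys in their DHHC-1 wrapping, so a round can be cross-checked against the key generation step by unwrapping one. Assuming the last four bytes of the base64 payload are the CRC32 suffix described earlier (GNU head strips them here):

secret='DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l:'   # key1 secret from this run
echo "$secret" | cut -d: -f3 | base64 -d | head -c -4; echo
# prints 45bb24447ae43d68d80132275dbe4164, the value gen_dhchap_key produced for key1 above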
00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.771 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.059 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.319 00:12:31.319 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.319 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.319 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.579 { 00:12:31.579 "cntlid": 5, 00:12:31.579 "qid": 0, 00:12:31.579 "state": "enabled", 00:12:31.579 "thread": "nvmf_tgt_poll_group_000", 00:12:31.579 "listen_address": { 00:12:31.579 "trtype": "TCP", 00:12:31.579 "adrfam": "IPv4", 00:12:31.579 "traddr": "10.0.0.2", 00:12:31.579 "trsvcid": "4420" 00:12:31.579 }, 00:12:31.579 "peer_address": { 00:12:31.579 "trtype": "TCP", 00:12:31.579 "adrfam": "IPv4", 00:12:31.579 "traddr": "10.0.0.1", 00:12:31.579 "trsvcid": "36736" 00:12:31.579 }, 00:12:31.579 "auth": { 00:12:31.579 "state": "completed", 00:12:31.579 "digest": "sha256", 00:12:31.579 "dhgroup": "null" 00:12:31.579 } 00:12:31.579 } 00:12:31.579 ]' 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.579 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.837 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.405 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:32.405 14:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.664 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.922 00:12:32.922 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.922 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.922 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.182 { 00:12:33.182 "cntlid": 7, 00:12:33.182 "qid": 0, 00:12:33.182 "state": "enabled", 00:12:33.182 "thread": "nvmf_tgt_poll_group_000", 00:12:33.182 "listen_address": { 00:12:33.182 "trtype": "TCP", 00:12:33.182 "adrfam": "IPv4", 00:12:33.182 "traddr": 
"10.0.0.2", 00:12:33.182 "trsvcid": "4420" 00:12:33.182 }, 00:12:33.182 "peer_address": { 00:12:33.182 "trtype": "TCP", 00:12:33.182 "adrfam": "IPv4", 00:12:33.182 "traddr": "10.0.0.1", 00:12:33.182 "trsvcid": "36776" 00:12:33.182 }, 00:12:33.182 "auth": { 00:12:33.182 "state": "completed", 00:12:33.182 "digest": "sha256", 00:12:33.182 "dhgroup": "null" 00:12:33.182 } 00:12:33.182 } 00:12:33.182 ]' 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:33.182 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.441 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.441 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.441 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.441 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.380 14:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.380 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.638 00:12:34.638 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.638 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.638 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.897 { 00:12:34.897 "cntlid": 9, 00:12:34.897 "qid": 0, 00:12:34.897 "state": "enabled", 00:12:34.897 "thread": "nvmf_tgt_poll_group_000", 00:12:34.897 "listen_address": { 00:12:34.897 "trtype": "TCP", 00:12:34.897 "adrfam": "IPv4", 00:12:34.897 "traddr": "10.0.0.2", 00:12:34.897 "trsvcid": "4420" 00:12:34.897 }, 00:12:34.897 "peer_address": { 00:12:34.897 "trtype": "TCP", 00:12:34.897 "adrfam": "IPv4", 00:12:34.897 "traddr": "10.0.0.1", 00:12:34.897 "trsvcid": "50148" 00:12:34.897 }, 00:12:34.897 "auth": { 00:12:34.897 "state": "completed", 00:12:34.897 "digest": "sha256", 00:12:34.897 "dhgroup": "ffdhe2048" 00:12:34.897 } 00:12:34.897 } 
00:12:34.897 ]' 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.897 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.156 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.156 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.156 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.156 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.156 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.415 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.981 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.240 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.499 00:12:36.499 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.499 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.499 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.757 { 00:12:36.757 "cntlid": 11, 00:12:36.757 "qid": 0, 00:12:36.757 "state": "enabled", 00:12:36.757 "thread": "nvmf_tgt_poll_group_000", 00:12:36.757 "listen_address": { 00:12:36.757 "trtype": "TCP", 00:12:36.757 "adrfam": "IPv4", 00:12:36.757 "traddr": "10.0.0.2", 00:12:36.757 "trsvcid": "4420" 00:12:36.757 }, 00:12:36.757 "peer_address": { 00:12:36.757 "trtype": "TCP", 00:12:36.757 "adrfam": "IPv4", 00:12:36.757 "traddr": "10.0.0.1", 00:12:36.757 "trsvcid": "50176" 00:12:36.757 }, 00:12:36.757 "auth": { 00:12:36.757 "state": "completed", 00:12:36.757 "digest": "sha256", 00:12:36.757 "dhgroup": "ffdhe2048" 00:12:36.757 } 00:12:36.757 } 00:12:36.757 ]' 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.757 14:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.757 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.757 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.757 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.757 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.015 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.583 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
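After each SPDK-host round the same key pair is also exercised from the kernel initiator: nvme connect is given the host and controller secrets in DHHC-1 form, and on success the controller is disconnected and the host entry is removed from the subsystem again. A sketch of that leg, with angle-bracket placeholders standing in for the DHHC-1 secret strings that appear in the trace:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
  hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12

  # Kernel initiator: authenticate with the per-key host secret and its controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "DHHC-1:01:<host secret>:" \
      --dhchap-ctrl-secret "DHHC-1:02:<controller secret>:"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

For key3 the trace passes only --dhchap-secret, since no controller key is configured for that index.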
00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.841 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.099 00:12:38.099 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.099 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.099 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.357 { 00:12:38.357 "cntlid": 13, 00:12:38.357 "qid": 0, 00:12:38.357 "state": "enabled", 00:12:38.357 "thread": "nvmf_tgt_poll_group_000", 00:12:38.357 "listen_address": { 00:12:38.357 "trtype": "TCP", 00:12:38.357 "adrfam": "IPv4", 00:12:38.357 "traddr": "10.0.0.2", 00:12:38.357 "trsvcid": "4420" 00:12:38.357 }, 00:12:38.357 "peer_address": { 00:12:38.357 "trtype": "TCP", 00:12:38.357 "adrfam": "IPv4", 00:12:38.357 "traddr": "10.0.0.1", 00:12:38.357 "trsvcid": "50204" 00:12:38.357 }, 00:12:38.357 "auth": { 00:12:38.357 "state": "completed", 00:12:38.357 "digest": "sha256", 00:12:38.357 "dhgroup": "ffdhe2048" 00:12:38.357 } 00:12:38.357 } 00:12:38.357 ]' 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.357 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.616 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.616 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.616 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.616 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.185 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.445 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.706 00:12:39.706 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.706 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.706 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.996 { 00:12:39.996 "cntlid": 15, 00:12:39.996 "qid": 0, 00:12:39.996 "state": "enabled", 00:12:39.996 "thread": "nvmf_tgt_poll_group_000", 00:12:39.996 "listen_address": { 00:12:39.996 "trtype": "TCP", 00:12:39.996 "adrfam": "IPv4", 00:12:39.996 "traddr": "10.0.0.2", 00:12:39.996 "trsvcid": "4420" 00:12:39.996 }, 00:12:39.996 "peer_address": { 00:12:39.996 "trtype": "TCP", 00:12:39.996 "adrfam": "IPv4", 00:12:39.996 "traddr": "10.0.0.1", 00:12:39.996 "trsvcid": "50240" 00:12:39.996 }, 00:12:39.996 "auth": { 00:12:39.996 "state": "completed", 00:12:39.996 "digest": "sha256", 00:12:39.996 "dhgroup": "ffdhe2048" 00:12:39.996 } 00:12:39.996 } 00:12:39.996 ]' 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:39.996 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.266 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.266 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.266 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.266 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.208 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.467 00:12:41.725 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.726 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.726 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.726 { 00:12:41.726 "cntlid": 17, 00:12:41.726 "qid": 0, 00:12:41.726 "state": "enabled", 00:12:41.726 "thread": "nvmf_tgt_poll_group_000", 00:12:41.726 "listen_address": { 00:12:41.726 "trtype": "TCP", 00:12:41.726 "adrfam": "IPv4", 00:12:41.726 "traddr": "10.0.0.2", 00:12:41.726 "trsvcid": "4420" 00:12:41.726 }, 00:12:41.726 "peer_address": { 00:12:41.726 "trtype": "TCP", 00:12:41.726 "adrfam": "IPv4", 00:12:41.726 "traddr": "10.0.0.1", 00:12:41.726 "trsvcid": "50260" 00:12:41.726 }, 00:12:41.726 "auth": { 00:12:41.726 "state": "completed", 00:12:41.726 "digest": "sha256", 00:12:41.726 "dhgroup": "ffdhe3072" 00:12:41.726 } 00:12:41.726 } 00:12:41.726 ]' 00:12:41.726 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.983 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.983 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.983 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:41.983 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.983 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.984 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.984 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.242 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:12:42.809 14:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:42.809 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.069 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.327 00:12:43.327 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.327 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
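Zooming out, everything from the sha256/null rounds through these ffdhe2048 and ffdhe3072 rounds is produced by the same nested loop in target/auth.sh (the @92/@93/@96 markers above): for each DH group, the host options are re-pinned and connect_authenticate is run once per configured key index. A schematic of that driver loop, assuming keys[] and ckeys[] were populated earlier in the test and using the hostrpc and connect_authenticate helpers named in the trace:

  for dhgroup in "${dhgroups[@]}"; do            # null ffdhe2048 ffdhe3072 ... in this excerpt
      for keyid in "${!keys[@]}"; do             # key indexes 0 1 2 3
          # Restrict the host driver to this digest/DH-group combination.
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          # Add host, attach, check qpair auth state, reconnect via nvme-cli, tear down.
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done

In this excerpt the digest stays at sha256; the full test presumably iterates the digest as well, but that is outside the lines shown here.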
00:12:43.328 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.587 { 00:12:43.587 "cntlid": 19, 00:12:43.587 "qid": 0, 00:12:43.587 "state": "enabled", 00:12:43.587 "thread": "nvmf_tgt_poll_group_000", 00:12:43.587 "listen_address": { 00:12:43.587 "trtype": "TCP", 00:12:43.587 "adrfam": "IPv4", 00:12:43.587 "traddr": "10.0.0.2", 00:12:43.587 "trsvcid": "4420" 00:12:43.587 }, 00:12:43.587 "peer_address": { 00:12:43.587 "trtype": "TCP", 00:12:43.587 "adrfam": "IPv4", 00:12:43.587 "traddr": "10.0.0.1", 00:12:43.587 "trsvcid": "50270" 00:12:43.587 }, 00:12:43.587 "auth": { 00:12:43.587 "state": "completed", 00:12:43.587 "digest": "sha256", 00:12:43.587 "dhgroup": "ffdhe3072" 00:12:43.587 } 00:12:43.587 } 00:12:43.587 ]' 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.587 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.846 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:44.444 14:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.444 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.703 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.962 00:12:44.962 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.962 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.962 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.221 { 00:12:45.221 "cntlid": 21, 00:12:45.221 "qid": 0, 00:12:45.221 "state": "enabled", 00:12:45.221 "thread": "nvmf_tgt_poll_group_000", 00:12:45.221 "listen_address": { 00:12:45.221 "trtype": "TCP", 00:12:45.221 "adrfam": "IPv4", 00:12:45.221 "traddr": "10.0.0.2", 00:12:45.221 "trsvcid": "4420" 00:12:45.221 }, 00:12:45.221 "peer_address": { 00:12:45.221 "trtype": "TCP", 00:12:45.221 "adrfam": "IPv4", 00:12:45.221 "traddr": "10.0.0.1", 00:12:45.221 "trsvcid": "34434" 00:12:45.221 }, 00:12:45.221 "auth": { 00:12:45.221 "state": "completed", 00:12:45.221 "digest": "sha256", 00:12:45.221 "dhgroup": "ffdhe3072" 00:12:45.221 } 00:12:45.221 } 00:12:45.221 ]' 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.221 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.480 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.050 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.309 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.569 00:12:46.569 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.569 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.569 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.828 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.828 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.828 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.829 { 00:12:46.829 "cntlid": 
23, 00:12:46.829 "qid": 0, 00:12:46.829 "state": "enabled", 00:12:46.829 "thread": "nvmf_tgt_poll_group_000", 00:12:46.829 "listen_address": { 00:12:46.829 "trtype": "TCP", 00:12:46.829 "adrfam": "IPv4", 00:12:46.829 "traddr": "10.0.0.2", 00:12:46.829 "trsvcid": "4420" 00:12:46.829 }, 00:12:46.829 "peer_address": { 00:12:46.829 "trtype": "TCP", 00:12:46.829 "adrfam": "IPv4", 00:12:46.829 "traddr": "10.0.0.1", 00:12:46.829 "trsvcid": "34464" 00:12:46.829 }, 00:12:46.829 "auth": { 00:12:46.829 "state": "completed", 00:12:46.829 "digest": "sha256", 00:12:46.829 "dhgroup": "ffdhe3072" 00:12:46.829 } 00:12:46.829 } 00:12:46.829 ]' 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.829 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.088 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:47.657 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:47.917 14:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.917 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.177 00:12:48.177 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.177 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.177 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.436 { 00:12:48.436 "cntlid": 25, 00:12:48.436 "qid": 0, 00:12:48.436 "state": "enabled", 00:12:48.436 "thread": "nvmf_tgt_poll_group_000", 00:12:48.436 "listen_address": { 00:12:48.436 "trtype": "TCP", 00:12:48.436 "adrfam": "IPv4", 00:12:48.436 "traddr": "10.0.0.2", 00:12:48.436 "trsvcid": "4420" 00:12:48.436 }, 00:12:48.436 "peer_address": { 00:12:48.436 "trtype": "TCP", 00:12:48.436 
"adrfam": "IPv4", 00:12:48.436 "traddr": "10.0.0.1", 00:12:48.436 "trsvcid": "34474" 00:12:48.436 }, 00:12:48.436 "auth": { 00:12:48.436 "state": "completed", 00:12:48.436 "digest": "sha256", 00:12:48.436 "dhgroup": "ffdhe4096" 00:12:48.436 } 00:12:48.436 } 00:12:48.436 ]' 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.436 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.695 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:48.695 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.695 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.695 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.695 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.955 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:49.525 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:49.788 14:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.788 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.052 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.052 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.319 { 00:12:50.319 "cntlid": 27, 00:12:50.319 "qid": 0, 00:12:50.319 "state": "enabled", 00:12:50.319 "thread": "nvmf_tgt_poll_group_000", 00:12:50.319 "listen_address": { 00:12:50.319 "trtype": "TCP", 00:12:50.319 "adrfam": "IPv4", 00:12:50.319 "traddr": "10.0.0.2", 00:12:50.319 "trsvcid": "4420" 00:12:50.319 }, 00:12:50.319 "peer_address": { 00:12:50.319 "trtype": "TCP", 00:12:50.319 "adrfam": "IPv4", 00:12:50.319 "traddr": "10.0.0.1", 00:12:50.319 "trsvcid": "34494" 00:12:50.319 }, 00:12:50.319 "auth": { 00:12:50.319 "state": "completed", 00:12:50.319 "digest": "sha256", 00:12:50.319 "dhgroup": "ffdhe4096" 00:12:50.319 } 00:12:50.319 } 00:12:50.319 ]' 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.319 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.585 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:12:51.174 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.174 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:51.174 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.174 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.174 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.175 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.175 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:51.175 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.449 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.711 00:12:51.711 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.711 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.711 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.971 { 00:12:51.971 "cntlid": 29, 00:12:51.971 "qid": 0, 00:12:51.971 "state": "enabled", 00:12:51.971 "thread": "nvmf_tgt_poll_group_000", 00:12:51.971 "listen_address": { 00:12:51.971 "trtype": "TCP", 00:12:51.971 "adrfam": "IPv4", 00:12:51.971 "traddr": "10.0.0.2", 00:12:51.971 "trsvcid": "4420" 00:12:51.971 }, 00:12:51.971 "peer_address": { 00:12:51.971 "trtype": "TCP", 00:12:51.971 "adrfam": "IPv4", 00:12:51.971 "traddr": "10.0.0.1", 00:12:51.971 "trsvcid": "34520" 00:12:51.971 }, 00:12:51.971 "auth": { 00:12:51.971 "state": "completed", 00:12:51.971 "digest": "sha256", 00:12:51.971 "dhgroup": "ffdhe4096" 00:12:51.971 } 00:12:51.971 } 00:12:51.971 ]' 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.971 14:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.971 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.231 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:52.800 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.060 14:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.060 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.628 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.628 { 00:12:53.628 "cntlid": 31, 00:12:53.628 "qid": 0, 00:12:53.628 "state": "enabled", 00:12:53.628 "thread": "nvmf_tgt_poll_group_000", 00:12:53.628 "listen_address": { 00:12:53.628 "trtype": "TCP", 00:12:53.628 "adrfam": "IPv4", 00:12:53.628 "traddr": "10.0.0.2", 00:12:53.628 "trsvcid": "4420" 00:12:53.628 }, 00:12:53.628 "peer_address": { 00:12:53.628 "trtype": "TCP", 00:12:53.628 "adrfam": "IPv4", 00:12:53.628 "traddr": "10.0.0.1", 00:12:53.628 "trsvcid": "35720" 00:12:53.628 }, 00:12:53.628 "auth": { 00:12:53.628 "state": "completed", 00:12:53.628 "digest": "sha256", 00:12:53.628 "dhgroup": "ffdhe4096" 00:12:53.628 } 00:12:53.628 } 00:12:53.628 ]' 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.628 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.888 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.888 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.888 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.888 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.888 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.148 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:12:54.717 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:54.718 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:54.978 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.549 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.549 { 00:12:55.549 "cntlid": 33, 00:12:55.549 "qid": 0, 00:12:55.549 "state": "enabled", 00:12:55.549 "thread": "nvmf_tgt_poll_group_000", 00:12:55.549 "listen_address": { 00:12:55.549 "trtype": "TCP", 00:12:55.549 "adrfam": "IPv4", 00:12:55.549 "traddr": "10.0.0.2", 00:12:55.549 "trsvcid": "4420" 00:12:55.549 }, 00:12:55.549 "peer_address": { 00:12:55.549 "trtype": "TCP", 00:12:55.549 "adrfam": "IPv4", 00:12:55.549 "traddr": "10.0.0.1", 00:12:55.549 "trsvcid": "35762" 00:12:55.549 }, 00:12:55.549 "auth": { 00:12:55.549 "state": "completed", 00:12:55.549 "digest": "sha256", 00:12:55.549 "dhgroup": "ffdhe6144" 00:12:55.549 } 00:12:55.549 } 00:12:55.549 ]' 00:12:55.549 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.809 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.068 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid 
ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:56.637 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.906 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.184 00:12:57.184 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.184 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.184 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.444 { 00:12:57.444 "cntlid": 35, 00:12:57.444 "qid": 0, 00:12:57.444 "state": "enabled", 00:12:57.444 "thread": "nvmf_tgt_poll_group_000", 00:12:57.444 "listen_address": { 00:12:57.444 "trtype": "TCP", 00:12:57.444 "adrfam": "IPv4", 00:12:57.444 "traddr": "10.0.0.2", 00:12:57.444 "trsvcid": "4420" 00:12:57.444 }, 00:12:57.444 "peer_address": { 00:12:57.444 "trtype": "TCP", 00:12:57.444 "adrfam": "IPv4", 00:12:57.444 "traddr": "10.0.0.1", 00:12:57.444 "trsvcid": "35788" 00:12:57.444 }, 00:12:57.444 "auth": { 00:12:57.444 "state": "completed", 00:12:57.444 "digest": "sha256", 00:12:57.444 "dhgroup": "ffdhe6144" 00:12:57.444 } 00:12:57.444 } 00:12:57.444 ]' 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.444 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.704 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:57.704 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.704 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.704 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.704 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.964 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:12:58.533 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.533 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:58.534 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.793 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.053 00:12:59.053 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.053 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.053 14:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.312 { 00:12:59.312 "cntlid": 37, 00:12:59.312 "qid": 0, 00:12:59.312 "state": "enabled", 00:12:59.312 "thread": "nvmf_tgt_poll_group_000", 00:12:59.312 "listen_address": { 00:12:59.312 "trtype": "TCP", 00:12:59.312 "adrfam": "IPv4", 00:12:59.312 "traddr": "10.0.0.2", 00:12:59.312 "trsvcid": "4420" 00:12:59.312 }, 00:12:59.312 "peer_address": { 00:12:59.312 "trtype": "TCP", 00:12:59.312 "adrfam": "IPv4", 00:12:59.312 "traddr": "10.0.0.1", 00:12:59.312 "trsvcid": "35808" 00:12:59.312 }, 00:12:59.312 "auth": { 00:12:59.312 "state": "completed", 00:12:59.312 "digest": "sha256", 00:12:59.312 "dhgroup": "ffdhe6144" 00:12:59.312 } 00:12:59.312 } 00:12:59.312 ]' 00:12:59.312 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.571 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.831 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
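The cycle traced above is the same for every digest/dhgroup/key combination the script iterates over: restrict the host-side bdev_nvme options to one digest and one DH group, register the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket and verify the authenticated qpair, detach, then authenticate once more with nvme-cli using the literal DHHC-1 secret before removing the host again. A minimal stand-alone sketch of one iteration (sha256 + ffdhe6144 + key3), reconstructed from this trace, follows; the NQNs, addresses and secret are taken verbatim from the log, while SPDK_DIR/HOST_SOCK are shorthand for the paths used on this VM, "key3" is assumed to be a key name set up earlier in the script, and the target-side rpc.py calls are assumed to use the default /var/tmp/spdk.sock.

#!/usr/bin/env bash
# One auth.sh-style iteration: sha256 digest, ffdhe6144 DH group, key3 (no controller key in this run).
SPDK_DIR=/home/vagrant/spdk_repo/spdk        # repo checkout used on the test VM
HOST_SOCK=/var/tmp/host.sock                 # RPC socket of the host-side SPDK application
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12

# Limit the host to the digest/DH-group pair under test.
"$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Allow the host NQN on the subsystem with key3.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# In-process authentication: attach a controller via the host RPC and inspect the qpair.
"$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
"$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN" | jq '.[0].auth'
"$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Kernel-initiator authentication with the literal DHHC-1 secret, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 \
        --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=:
nvme disconnect -n "$SUBNQN"
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The jq checks after each attach in the trace assert that .[0].auth.digest, .[0].auth.dhgroup and .[0].auth.state reported by nvmf_subsystem_get_qpairs come back as the configured digest, the configured DH group and "completed", which is what distinguishes a DH-HMAC-CHAP-authenticated qpair from an unauthenticated one.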
00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:00.400 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.659 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.917 00:13:01.175 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.175 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.175 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.433 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.434 14:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.434 { 00:13:01.434 "cntlid": 39, 00:13:01.434 "qid": 0, 00:13:01.434 "state": "enabled", 00:13:01.434 "thread": "nvmf_tgt_poll_group_000", 00:13:01.434 "listen_address": { 00:13:01.434 "trtype": "TCP", 00:13:01.434 "adrfam": "IPv4", 00:13:01.434 "traddr": "10.0.0.2", 00:13:01.434 "trsvcid": "4420" 00:13:01.434 }, 00:13:01.434 "peer_address": { 00:13:01.434 "trtype": "TCP", 00:13:01.434 "adrfam": "IPv4", 00:13:01.434 "traddr": "10.0.0.1", 00:13:01.434 "trsvcid": "35844" 00:13:01.434 }, 00:13:01.434 "auth": { 00:13:01.434 "state": "completed", 00:13:01.434 "digest": "sha256", 00:13:01.434 "dhgroup": "ffdhe6144" 00:13:01.434 } 00:13:01.434 } 00:13:01.434 ]' 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.434 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.693 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:02.269 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.540 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.107 00:13:03.107 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.107 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.107 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.367 { 00:13:03.367 "cntlid": 41, 00:13:03.367 "qid": 0, 
00:13:03.367 "state": "enabled", 00:13:03.367 "thread": "nvmf_tgt_poll_group_000", 00:13:03.367 "listen_address": { 00:13:03.367 "trtype": "TCP", 00:13:03.367 "adrfam": "IPv4", 00:13:03.367 "traddr": "10.0.0.2", 00:13:03.367 "trsvcid": "4420" 00:13:03.367 }, 00:13:03.367 "peer_address": { 00:13:03.367 "trtype": "TCP", 00:13:03.367 "adrfam": "IPv4", 00:13:03.367 "traddr": "10.0.0.1", 00:13:03.367 "trsvcid": "35890" 00:13:03.367 }, 00:13:03.367 "auth": { 00:13:03.367 "state": "completed", 00:13:03.367 "digest": "sha256", 00:13:03.367 "dhgroup": "ffdhe8192" 00:13:03.367 } 00:13:03.367 } 00:13:03.367 ]' 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.367 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.626 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.626 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.626 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.626 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.626 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.885 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.455 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.714 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.284 00:13:05.284 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.284 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.284 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.544 { 00:13:05.544 "cntlid": 43, 00:13:05.544 "qid": 0, 00:13:05.544 "state": "enabled", 00:13:05.544 "thread": "nvmf_tgt_poll_group_000", 00:13:05.544 "listen_address": { 00:13:05.544 "trtype": "TCP", 00:13:05.544 "adrfam": "IPv4", 00:13:05.544 "traddr": "10.0.0.2", 00:13:05.544 "trsvcid": "4420" 00:13:05.544 }, 00:13:05.544 "peer_address": { 00:13:05.544 "trtype": "TCP", 00:13:05.544 "adrfam": "IPv4", 00:13:05.544 "traddr": "10.0.0.1", 
00:13:05.544 "trsvcid": "59970" 00:13:05.544 }, 00:13:05.544 "auth": { 00:13:05.544 "state": "completed", 00:13:05.544 "digest": "sha256", 00:13:05.544 "dhgroup": "ffdhe8192" 00:13:05.544 } 00:13:05.544 } 00:13:05.544 ]' 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.544 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.803 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.803 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.803 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.803 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.803 14:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.063 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.633 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:06.900 14:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.900 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.483 00:13:07.483 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.483 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.483 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.743 { 00:13:07.743 "cntlid": 45, 00:13:07.743 "qid": 0, 00:13:07.743 "state": "enabled", 00:13:07.743 "thread": "nvmf_tgt_poll_group_000", 00:13:07.743 "listen_address": { 00:13:07.743 "trtype": "TCP", 00:13:07.743 "adrfam": "IPv4", 00:13:07.743 "traddr": "10.0.0.2", 00:13:07.743 "trsvcid": "4420" 00:13:07.743 }, 00:13:07.743 "peer_address": { 00:13:07.743 "trtype": "TCP", 00:13:07.743 "adrfam": "IPv4", 00:13:07.743 "traddr": "10.0.0.1", 00:13:07.743 "trsvcid": "60008" 00:13:07.743 }, 00:13:07.743 "auth": { 00:13:07.743 "state": "completed", 00:13:07.743 "digest": "sha256", 00:13:07.743 "dhgroup": "ffdhe8192" 00:13:07.743 } 00:13:07.743 } 00:13:07.743 ]' 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.743 14:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:07.743 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.002 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.002 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.002 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.002 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:08.939 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:08.939 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 
--dhchap-key key3 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:08.940 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.515 00:13:09.774 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.774 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.774 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.774 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.774 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.774 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.774 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.034 { 00:13:10.034 "cntlid": 47, 00:13:10.034 "qid": 0, 00:13:10.034 "state": "enabled", 00:13:10.034 "thread": "nvmf_tgt_poll_group_000", 00:13:10.034 "listen_address": { 00:13:10.034 "trtype": "TCP", 00:13:10.034 "adrfam": "IPv4", 00:13:10.034 "traddr": "10.0.0.2", 00:13:10.034 "trsvcid": "4420" 00:13:10.034 }, 00:13:10.034 "peer_address": { 00:13:10.034 "trtype": "TCP", 00:13:10.034 "adrfam": "IPv4", 00:13:10.034 "traddr": "10.0.0.1", 00:13:10.034 "trsvcid": "60040" 00:13:10.034 }, 00:13:10.034 "auth": { 00:13:10.034 "state": "completed", 00:13:10.034 "digest": "sha256", 00:13:10.034 "dhgroup": "ffdhe8192" 00:13:10.034 } 00:13:10.034 } 00:13:10.034 ]' 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.034 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.296 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:10.875 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.144 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.413 00:13:11.413 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.413 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.413 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.686 { 00:13:11.686 "cntlid": 49, 00:13:11.686 "qid": 0, 00:13:11.686 "state": "enabled", 00:13:11.686 "thread": "nvmf_tgt_poll_group_000", 00:13:11.686 "listen_address": { 00:13:11.686 "trtype": "TCP", 00:13:11.686 "adrfam": "IPv4", 00:13:11.686 "traddr": "10.0.0.2", 00:13:11.686 "trsvcid": "4420" 00:13:11.686 }, 00:13:11.686 "peer_address": { 00:13:11.686 "trtype": "TCP", 00:13:11.686 "adrfam": "IPv4", 00:13:11.686 "traddr": "10.0.0.1", 00:13:11.686 "trsvcid": "60054" 00:13:11.686 }, 00:13:11.686 "auth": { 00:13:11.686 "state": "completed", 00:13:11.686 "digest": "sha384", 00:13:11.686 "dhgroup": "null" 00:13:11.686 } 00:13:11.686 } 00:13:11.686 ]' 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.686 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.960 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:11.960 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.960 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.960 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.960 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.225 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:12.794 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.091 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.091 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.350 00:13:13.350 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.350 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.350 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.610 { 00:13:13.610 "cntlid": 51, 00:13:13.610 "qid": 0, 00:13:13.610 "state": "enabled", 00:13:13.610 "thread": "nvmf_tgt_poll_group_000", 00:13:13.610 "listen_address": { 00:13:13.610 "trtype": "TCP", 00:13:13.610 "adrfam": "IPv4", 00:13:13.610 "traddr": "10.0.0.2", 00:13:13.610 "trsvcid": "4420" 00:13:13.610 }, 00:13:13.610 "peer_address": { 00:13:13.610 "trtype": "TCP", 00:13:13.610 "adrfam": "IPv4", 00:13:13.610 "traddr": "10.0.0.1", 00:13:13.610 "trsvcid": "60076" 00:13:13.610 }, 00:13:13.610 "auth": { 00:13:13.610 "state": "completed", 00:13:13.610 "digest": "sha384", 00:13:13.610 "dhgroup": "null" 00:13:13.610 } 00:13:13.610 } 00:13:13.610 ]' 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:13.610 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.869 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.869 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.869 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.869 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret 
DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.806 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.806 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.066 00:13:15.066 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.066 14:01:24 
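After the user-space checks, the same key is pushed through the kernel initiator: nvme connect is invoked with the DHHC-1 secrets and the connection is torn down again. A trimmed sketch of that step follows; <host-key> and <ctrl-key> are placeholders for the DHHC-1:... strings printed in the trace, whose leading DHHC-1:0x: field records how the secret was transformed (00 cleartext, 01/02/03 SHA-256/384/512).

# Kernel-initiator leg, mirroring the nvme-cli invocation in the trace.
# With only --dhchap-secret the host authenticates to the controller; adding
# --dhchap-ctrl-secret makes the exchange bidirectional. -i 1 asks for a single I/O queue.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 \
    --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 \
    --dhchap-secret '<host-key>' --dhchap-ctrl-secret '<ctrl-key>'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: "disconnected 1 controller(s)"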
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.066 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.325 { 00:13:15.325 "cntlid": 53, 00:13:15.325 "qid": 0, 00:13:15.325 "state": "enabled", 00:13:15.325 "thread": "nvmf_tgt_poll_group_000", 00:13:15.325 "listen_address": { 00:13:15.325 "trtype": "TCP", 00:13:15.325 "adrfam": "IPv4", 00:13:15.325 "traddr": "10.0.0.2", 00:13:15.325 "trsvcid": "4420" 00:13:15.325 }, 00:13:15.325 "peer_address": { 00:13:15.325 "trtype": "TCP", 00:13:15.325 "adrfam": "IPv4", 00:13:15.325 "traddr": "10.0.0.1", 00:13:15.325 "trsvcid": "52146" 00:13:15.325 }, 00:13:15.325 "auth": { 00:13:15.325 "state": "completed", 00:13:15.325 "digest": "sha384", 00:13:15.325 "dhgroup": "null" 00:13:15.325 } 00:13:15.325 } 00:13:15.325 ]' 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:15.325 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.585 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.585 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.585 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.844 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.413 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.414 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.672 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.950 00:13:16.950 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.950 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.950 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.208 { 00:13:17.208 "cntlid": 55, 00:13:17.208 "qid": 0, 00:13:17.208 "state": "enabled", 00:13:17.208 "thread": "nvmf_tgt_poll_group_000", 00:13:17.208 "listen_address": { 00:13:17.208 "trtype": "TCP", 00:13:17.208 "adrfam": "IPv4", 00:13:17.208 "traddr": "10.0.0.2", 00:13:17.208 "trsvcid": "4420" 00:13:17.208 }, 00:13:17.208 "peer_address": { 00:13:17.208 "trtype": "TCP", 00:13:17.208 "adrfam": "IPv4", 00:13:17.208 "traddr": "10.0.0.1", 00:13:17.208 "trsvcid": "52180" 00:13:17.208 }, 00:13:17.208 "auth": { 00:13:17.208 "state": "completed", 00:13:17.208 "digest": "sha384", 00:13:17.208 "dhgroup": "null" 00:13:17.208 } 00:13:17.208 } 00:13:17.208 ]' 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.208 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.467 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.402 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.660 00:13:18.660 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.660 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.660 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.919 14:01:28 
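Zooming out, the for-loops that keep reappearing in the xtrace markers (auth.sh@91-@96) are the outer structure of this whole section. A skeleton of that structure as implied by the trace; the digests/dhgroups/keys arrays are populated earlier in auth.sh, so the members listed below are an assumption limited to what is visible in this part of the log, and hostrpc / connect_authenticate are the script's own helpers.

# Loop skeleton reconstructed from the trace.
digests=(sha256 sha384)
dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)    # subset visible in this trace

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do           # keys[0..3] / ckeys[0..3] set up earlier
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done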
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.919 { 00:13:18.919 "cntlid": 57, 00:13:18.919 "qid": 0, 00:13:18.919 "state": "enabled", 00:13:18.919 "thread": "nvmf_tgt_poll_group_000", 00:13:18.919 "listen_address": { 00:13:18.919 "trtype": "TCP", 00:13:18.919 "adrfam": "IPv4", 00:13:18.919 "traddr": "10.0.0.2", 00:13:18.919 "trsvcid": "4420" 00:13:18.919 }, 00:13:18.919 "peer_address": { 00:13:18.919 "trtype": "TCP", 00:13:18.919 "adrfam": "IPv4", 00:13:18.919 "traddr": "10.0.0.1", 00:13:18.919 "trsvcid": "52214" 00:13:18.919 }, 00:13:18.919 "auth": { 00:13:18.919 "state": "completed", 00:13:18.919 "digest": "sha384", 00:13:18.919 "dhgroup": "ffdhe2048" 00:13:18.919 } 00:13:18.919 } 00:13:18.919 ]' 00:13:18.919 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.179 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.438 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:20.004 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.298 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.558 00:13:20.817 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.817 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.817 14:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.817 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.817 { 00:13:20.817 "cntlid": 59, 00:13:20.817 "qid": 0, 00:13:20.817 "state": "enabled", 00:13:20.817 "thread": "nvmf_tgt_poll_group_000", 00:13:20.817 "listen_address": { 00:13:20.817 "trtype": "TCP", 00:13:20.817 "adrfam": "IPv4", 00:13:20.817 "traddr": "10.0.0.2", 00:13:20.817 "trsvcid": "4420" 
00:13:20.817 }, 00:13:20.817 "peer_address": { 00:13:20.817 "trtype": "TCP", 00:13:20.817 "adrfam": "IPv4", 00:13:20.817 "traddr": "10.0.0.1", 00:13:20.817 "trsvcid": "52238" 00:13:20.817 }, 00:13:20.817 "auth": { 00:13:20.817 "state": "completed", 00:13:20.817 "digest": "sha384", 00:13:20.817 "dhgroup": "ffdhe2048" 00:13:20.817 } 00:13:20.817 } 00:13:20.817 ]' 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.076 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.334 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
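Each pass above validates the negotiated parameters by asking the target for the subsystem's queue pairs and picking the auth fields out of the JSON it returns. A minimal sketch of that check for the ffdhe2048 passes in this stretch, paraphrasing what target/auth.sh traces rather than quoting it; rpc_cmd is the autotest wrapper around scripts/rpc.py for the target's RPC socket, and the expected values are simply the loop's digest and dhgroup:

  # The host-side SPDK app must report the attached controller by name first.
  [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Then the target is asked for the subsystem's qpairs and the negotiated auth
  # parameters are compared against what this iteration configured.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Only when all three fields match does the iteration proceed to detach the controller and repeat with the next key.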
00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.270 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.528 00:13:22.787 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.787 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.787 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.787 { 00:13:22.787 "cntlid": 61, 00:13:22.787 "qid": 0, 00:13:22.787 "state": "enabled", 00:13:22.787 "thread": "nvmf_tgt_poll_group_000", 00:13:22.787 "listen_address": { 00:13:22.787 "trtype": "TCP", 00:13:22.787 "adrfam": "IPv4", 00:13:22.787 "traddr": "10.0.0.2", 00:13:22.787 "trsvcid": "4420" 00:13:22.787 }, 00:13:22.787 "peer_address": { 00:13:22.787 "trtype": "TCP", 00:13:22.787 "adrfam": "IPv4", 00:13:22.787 "traddr": "10.0.0.1", 00:13:22.787 "trsvcid": "52254" 00:13:22.787 }, 00:13:22.787 "auth": { 00:13:22.787 "state": "completed", 00:13:22.787 "digest": "sha384", 00:13:22.787 "dhgroup": "ffdhe2048" 00:13:22.787 } 00:13:22.787 } 00:13:22.787 ]' 00:13:22.787 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.046 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.305 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:23.872 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.131 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.389 00:13:24.389 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.389 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.389 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.647 { 00:13:24.647 "cntlid": 63, 00:13:24.647 "qid": 0, 00:13:24.647 "state": "enabled", 00:13:24.647 "thread": "nvmf_tgt_poll_group_000", 00:13:24.647 "listen_address": { 00:13:24.647 "trtype": "TCP", 00:13:24.647 "adrfam": "IPv4", 00:13:24.647 "traddr": "10.0.0.2", 00:13:24.647 "trsvcid": "4420" 00:13:24.647 }, 00:13:24.647 "peer_address": { 00:13:24.647 "trtype": "TCP", 00:13:24.647 "adrfam": "IPv4", 00:13:24.647 "traddr": "10.0.0.1", 00:13:24.647 "trsvcid": "44684" 00:13:24.647 }, 00:13:24.647 "auth": { 00:13:24.647 "state": "completed", 00:13:24.647 "digest": "sha384", 00:13:24.647 "dhgroup": "ffdhe2048" 00:13:24.647 } 00:13:24.647 } 00:13:24.647 ]' 00:13:24.647 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.906 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.906 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.906 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:24.906 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:13:24.906 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.906 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.906 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.165 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:25.732 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.990 14:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.990 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.249 00:13:26.249 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.249 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.249 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.508 { 00:13:26.508 "cntlid": 65, 00:13:26.508 "qid": 0, 00:13:26.508 "state": "enabled", 00:13:26.508 "thread": "nvmf_tgt_poll_group_000", 00:13:26.508 "listen_address": { 00:13:26.508 "trtype": "TCP", 00:13:26.508 "adrfam": "IPv4", 00:13:26.508 "traddr": "10.0.0.2", 00:13:26.508 "trsvcid": "4420" 00:13:26.508 }, 00:13:26.508 "peer_address": { 00:13:26.508 "trtype": "TCP", 00:13:26.508 "adrfam": "IPv4", 00:13:26.508 "traddr": "10.0.0.1", 00:13:26.508 "trsvcid": "44704" 00:13:26.508 }, 00:13:26.508 "auth": { 00:13:26.508 "state": "completed", 00:13:26.508 "digest": "sha384", 00:13:26.508 "dhgroup": "ffdhe3072" 00:13:26.508 } 00:13:26.508 } 00:13:26.508 ]' 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.508 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.767 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:26.767 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.767 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.767 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.767 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.027 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:27.595 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:27.854 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:27.855 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.113 00:13:28.113 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.113 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.114 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.373 { 00:13:28.373 "cntlid": 67, 00:13:28.373 "qid": 0, 00:13:28.373 "state": "enabled", 00:13:28.373 "thread": "nvmf_tgt_poll_group_000", 00:13:28.373 "listen_address": { 00:13:28.373 "trtype": "TCP", 00:13:28.373 "adrfam": "IPv4", 00:13:28.373 "traddr": "10.0.0.2", 00:13:28.373 "trsvcid": "4420" 00:13:28.373 }, 00:13:28.373 "peer_address": { 00:13:28.373 "trtype": "TCP", 00:13:28.373 "adrfam": "IPv4", 00:13:28.373 "traddr": "10.0.0.1", 00:13:28.373 "trsvcid": "44718" 00:13:28.373 }, 00:13:28.373 "auth": { 00:13:28.373 "state": "completed", 00:13:28.373 "digest": "sha384", 00:13:28.373 "dhgroup": "ffdhe3072" 00:13:28.373 } 00:13:28.373 } 00:13:28.373 ]' 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.373 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.632 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:28.632 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.632 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.632 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.632 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.891 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid 
ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.460 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.719 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.720 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
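Condensed, the authenticated attach just traced (key2 over ffdhe3072) is three RPCs: constrain the initiator's DH-HMAC-CHAP parameters, allow the host NQN on the subsystem with a key pair, then attach a controller using the same pair. A minimal sketch, assuming the key objects key2/ckey2 were registered earlier in the run; HOSTNQN/SUBNQN are local shorthands here, the -s /var/tmp/host.sock instance is the initiator-side SPDK app, and rpc_cmd is the suite's wrapper for the target socket:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Initiator side: restrict the digest/dhgroup that may be negotiated.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host NQN with a DH-HMAC-CHAP key pair.
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Initiator side: attach a controller; this only succeeds if the handshake completes.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2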
00:13:29.979 00:13:29.979 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.979 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.979 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.238 { 00:13:30.238 "cntlid": 69, 00:13:30.238 "qid": 0, 00:13:30.238 "state": "enabled", 00:13:30.238 "thread": "nvmf_tgt_poll_group_000", 00:13:30.238 "listen_address": { 00:13:30.238 "trtype": "TCP", 00:13:30.238 "adrfam": "IPv4", 00:13:30.238 "traddr": "10.0.0.2", 00:13:30.238 "trsvcid": "4420" 00:13:30.238 }, 00:13:30.238 "peer_address": { 00:13:30.238 "trtype": "TCP", 00:13:30.238 "adrfam": "IPv4", 00:13:30.238 "traddr": "10.0.0.1", 00:13:30.238 "trsvcid": "44744" 00:13:30.238 }, 00:13:30.238 "auth": { 00:13:30.238 "state": "completed", 00:13:30.238 "digest": "sha384", 00:13:30.238 "dhgroup": "ffdhe3072" 00:13:30.238 } 00:13:30.238 } 00:13:30.238 ]' 00:13:30.238 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.497 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.756 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:31.326 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
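Besides the SPDK-initiator attach, each iteration also drives the kernel initiator through nvme-cli with the same key material, then tears the host mapping down before the next combination. A condensed sketch of that leg; the <...> placeholders stand in for the DHHC-1 secrets printed in the trace (generated earlier in the run), so no new values are introduced here:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Kernel initiator: authenticate with the raw DHHC-1 secrets instead of SPDK key objects.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 \
      --dhchap-secret '<host DHHC-1 secret>' --dhchap-ctrl-secret '<controller DHHC-1 secret>'
  nvme disconnect -n "$SUBNQN"

  # Remove the host allowance so the next key/dhgroup combination starts clean.
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"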
00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.585 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.844 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.103 00:13:32.103 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.103 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.103 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.362 { 00:13:32.362 "cntlid": 71, 00:13:32.362 "qid": 0, 00:13:32.362 "state": "enabled", 00:13:32.362 "thread": "nvmf_tgt_poll_group_000", 00:13:32.362 "listen_address": { 00:13:32.362 "trtype": "TCP", 00:13:32.362 "adrfam": "IPv4", 00:13:32.362 "traddr": "10.0.0.2", 00:13:32.362 "trsvcid": "4420" 00:13:32.362 }, 00:13:32.362 "peer_address": { 00:13:32.362 "trtype": "TCP", 00:13:32.362 "adrfam": "IPv4", 00:13:32.362 "traddr": "10.0.0.1", 00:13:32.362 "trsvcid": "44768" 00:13:32.362 }, 00:13:32.362 "auth": { 00:13:32.362 "state": "completed", 00:13:32.362 "digest": "sha384", 00:13:32.362 "dhgroup": "ffdhe3072" 00:13:32.362 } 00:13:32.362 } 00:13:32.362 ]' 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.362 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.620 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.575 14:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.835 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.094 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.094 14:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.353 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.353 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.353 { 00:13:34.353 "cntlid": 73, 00:13:34.353 "qid": 0, 00:13:34.354 "state": "enabled", 00:13:34.354 "thread": "nvmf_tgt_poll_group_000", 00:13:34.354 "listen_address": { 00:13:34.354 "trtype": "TCP", 00:13:34.354 "adrfam": "IPv4", 00:13:34.354 "traddr": "10.0.0.2", 00:13:34.354 "trsvcid": "4420" 00:13:34.354 }, 00:13:34.354 "peer_address": { 00:13:34.354 "trtype": "TCP", 00:13:34.354 "adrfam": "IPv4", 00:13:34.354 "traddr": "10.0.0.1", 00:13:34.354 "trsvcid": "46624" 00:13:34.354 }, 00:13:34.354 "auth": { 00:13:34.354 "state": "completed", 00:13:34.354 "digest": "sha384", 00:13:34.354 "dhgroup": "ffdhe4096" 00:13:34.354 } 00:13:34.354 } 00:13:34.354 ]' 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.354 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.613 14:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:35.183 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.442 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.014 00:13:36.014 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.014 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.014 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.271 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.271 { 00:13:36.271 "cntlid": 75, 00:13:36.271 "qid": 0, 00:13:36.272 
"state": "enabled", 00:13:36.272 "thread": "nvmf_tgt_poll_group_000", 00:13:36.272 "listen_address": { 00:13:36.272 "trtype": "TCP", 00:13:36.272 "adrfam": "IPv4", 00:13:36.272 "traddr": "10.0.0.2", 00:13:36.272 "trsvcid": "4420" 00:13:36.272 }, 00:13:36.272 "peer_address": { 00:13:36.272 "trtype": "TCP", 00:13:36.272 "adrfam": "IPv4", 00:13:36.272 "traddr": "10.0.0.1", 00:13:36.272 "trsvcid": "46656" 00:13:36.272 }, 00:13:36.272 "auth": { 00:13:36.272 "state": "completed", 00:13:36.272 "digest": "sha384", 00:13:36.272 "dhgroup": "ffdhe4096" 00:13:36.272 } 00:13:36.272 } 00:13:36.272 ]' 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.272 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.529 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:37.514 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.515 14:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.081 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.082 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.082 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.082 { 00:13:38.082 "cntlid": 77, 00:13:38.082 "qid": 0, 00:13:38.082 "state": "enabled", 00:13:38.082 "thread": "nvmf_tgt_poll_group_000", 00:13:38.082 "listen_address": { 00:13:38.082 "trtype": "TCP", 00:13:38.082 "adrfam": "IPv4", 00:13:38.082 "traddr": "10.0.0.2", 00:13:38.082 "trsvcid": "4420" 00:13:38.082 }, 00:13:38.082 "peer_address": { 00:13:38.082 "trtype": "TCP", 00:13:38.082 "adrfam": "IPv4", 00:13:38.082 "traddr": "10.0.0.1", 00:13:38.082 "trsvcid": "46682" 00:13:38.082 }, 00:13:38.082 
"auth": { 00:13:38.082 "state": "completed", 00:13:38.082 "digest": "sha384", 00:13:38.082 "dhgroup": "ffdhe4096" 00:13:38.082 } 00:13:38.082 } 00:13:38.082 ]' 00:13:38.082 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.341 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.600 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:39.170 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.428 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.992 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.992 { 00:13:39.992 "cntlid": 79, 00:13:39.992 "qid": 0, 00:13:39.992 "state": "enabled", 00:13:39.992 "thread": "nvmf_tgt_poll_group_000", 00:13:39.992 "listen_address": { 00:13:39.992 "trtype": "TCP", 00:13:39.992 "adrfam": "IPv4", 00:13:39.992 "traddr": "10.0.0.2", 00:13:39.992 "trsvcid": "4420" 00:13:39.992 }, 00:13:39.992 "peer_address": { 00:13:39.992 "trtype": "TCP", 00:13:39.992 "adrfam": "IPv4", 00:13:39.992 "traddr": "10.0.0.1", 00:13:39.992 "trsvcid": "46706" 00:13:39.992 }, 00:13:39.992 "auth": { 00:13:39.992 "state": "completed", 00:13:39.992 "digest": "sha384", 00:13:39.992 "dhgroup": "ffdhe4096" 00:13:39.992 } 00:13:39.992 } 00:13:39.992 ]' 00:13:39.992 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
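The pass/fail signal for each pass is the auth object that nvmf_subsystem_get_qpairs reports for the first listed qpair (qid 0): the three jq probes in the trace check that the negotiated digest and dhgroup match what was configured and that the authentication state reached "completed". A condensed equivalent of those three checks is sketched below; it is not the script's literal code, it assumes the target answers on rpc.py's default socket, and it folds the probes into a single jq -e expression whose exit status carries the result.

  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # .[0].auth from this run looks like: {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe4096"}
  jq -e '.[0].auth | .digest == "sha384" and .dhgroup == "ffdhe4096" and .state == "completed"' <<< "$qpairs"
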
00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.276 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.277 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.553 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.119 14:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.119 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.378 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.378 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.378 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.636 00:13:41.636 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.636 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.636 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.894 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.894 { 00:13:41.894 "cntlid": 81, 00:13:41.894 "qid": 0, 00:13:41.894 "state": "enabled", 00:13:41.894 "thread": "nvmf_tgt_poll_group_000", 00:13:41.894 "listen_address": { 00:13:41.895 "trtype": "TCP", 00:13:41.895 "adrfam": "IPv4", 00:13:41.895 "traddr": "10.0.0.2", 00:13:41.895 "trsvcid": "4420" 00:13:41.895 }, 00:13:41.895 "peer_address": { 00:13:41.895 "trtype": "TCP", 00:13:41.895 "adrfam": "IPv4", 00:13:41.895 "traddr": "10.0.0.1", 00:13:41.895 "trsvcid": "46724" 00:13:41.895 }, 00:13:41.895 "auth": { 00:13:41.895 "state": "completed", 00:13:41.895 "digest": "sha384", 00:13:41.895 "dhgroup": "ffdhe6144" 00:13:41.895 } 00:13:41.895 } 00:13:41.895 ]' 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.895 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.153 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.088 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.655 00:13:43.655 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.655 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.655 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.914 { 00:13:43.914 "cntlid": 83, 00:13:43.914 "qid": 0, 00:13:43.914 "state": "enabled", 00:13:43.914 "thread": "nvmf_tgt_poll_group_000", 00:13:43.914 "listen_address": { 00:13:43.914 "trtype": "TCP", 00:13:43.914 "adrfam": "IPv4", 00:13:43.914 "traddr": "10.0.0.2", 00:13:43.914 "trsvcid": "4420" 00:13:43.914 }, 00:13:43.914 "peer_address": { 00:13:43.914 "trtype": "TCP", 00:13:43.914 "adrfam": "IPv4", 00:13:43.914 "traddr": "10.0.0.1", 00:13:43.914 "trsvcid": "54770" 00:13:43.914 }, 00:13:43.914 "auth": { 00:13:43.914 "state": "completed", 00:13:43.914 "digest": "sha384", 00:13:43.914 "dhgroup": "ffdhe6144" 00:13:43.914 } 00:13:43.914 } 00:13:43.914 ]' 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.914 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.174 14:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.111 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.681 00:13:45.681 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.681 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.681 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.940 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.940 { 00:13:45.940 "cntlid": 85, 00:13:45.940 "qid": 0, 00:13:45.941 "state": "enabled", 00:13:45.941 "thread": "nvmf_tgt_poll_group_000", 00:13:45.941 "listen_address": { 00:13:45.941 "trtype": "TCP", 00:13:45.941 "adrfam": "IPv4", 00:13:45.941 "traddr": "10.0.0.2", 00:13:45.941 "trsvcid": "4420" 00:13:45.941 }, 00:13:45.941 "peer_address": { 00:13:45.941 "trtype": "TCP", 00:13:45.941 "adrfam": "IPv4", 00:13:45.941 "traddr": "10.0.0.1", 00:13:45.941 "trsvcid": "54786" 00:13:45.941 }, 00:13:45.941 "auth": { 00:13:45.941 "state": "completed", 00:13:45.941 "digest": "sha384", 00:13:45.941 "dhgroup": "ffdhe6144" 00:13:45.941 } 00:13:45.941 } 00:13:45.941 ]' 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.941 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.211 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret 
DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:46.798 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:47.366 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:47.625 00:13:47.625 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.625 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.625 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.885 { 00:13:47.885 "cntlid": 87, 00:13:47.885 "qid": 0, 00:13:47.885 "state": "enabled", 00:13:47.885 "thread": "nvmf_tgt_poll_group_000", 00:13:47.885 "listen_address": { 00:13:47.885 "trtype": "TCP", 00:13:47.885 "adrfam": "IPv4", 00:13:47.885 "traddr": "10.0.0.2", 00:13:47.885 "trsvcid": "4420" 00:13:47.885 }, 00:13:47.885 "peer_address": { 00:13:47.885 "trtype": "TCP", 00:13:47.885 "adrfam": "IPv4", 00:13:47.885 "traddr": "10.0.0.1", 00:13:47.885 "trsvcid": "54806" 00:13:47.885 }, 00:13:47.885 "auth": { 00:13:47.885 "state": "completed", 00:13:47.885 "digest": "sha384", 00:13:47.885 "dhgroup": "ffdhe6144" 00:13:47.885 } 00:13:47.885 } 00:13:47.885 ]' 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.885 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.144 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.144 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.144 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.144 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.083 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.651 00:13:49.651 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.651 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.651 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.909 14:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.909 { 00:13:49.909 "cntlid": 89, 00:13:49.909 "qid": 0, 00:13:49.909 "state": "enabled", 00:13:49.909 "thread": "nvmf_tgt_poll_group_000", 00:13:49.909 "listen_address": { 00:13:49.909 "trtype": "TCP", 00:13:49.909 "adrfam": "IPv4", 00:13:49.909 "traddr": "10.0.0.2", 00:13:49.909 "trsvcid": "4420" 00:13:49.909 }, 00:13:49.909 "peer_address": { 00:13:49.909 "trtype": "TCP", 00:13:49.909 "adrfam": "IPv4", 00:13:49.909 "traddr": "10.0.0.1", 00:13:49.909 "trsvcid": "54834" 00:13:49.909 }, 00:13:49.909 "auth": { 00:13:49.909 "state": "completed", 00:13:49.909 "digest": "sha384", 00:13:49.909 "dhgroup": "ffdhe8192" 00:13:49.909 } 00:13:49.909 } 00:13:49.909 ]' 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.909 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.168 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.168 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.168 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.168 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.168 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.427 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.994 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.254 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.859 00:13:51.859 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.859 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.859 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.137 14:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.137 { 00:13:52.137 "cntlid": 91, 00:13:52.137 "qid": 0, 00:13:52.137 "state": "enabled", 00:13:52.137 "thread": "nvmf_tgt_poll_group_000", 00:13:52.137 "listen_address": { 00:13:52.137 "trtype": "TCP", 00:13:52.137 "adrfam": "IPv4", 00:13:52.137 "traddr": "10.0.0.2", 00:13:52.137 "trsvcid": "4420" 00:13:52.137 }, 00:13:52.137 "peer_address": { 00:13:52.137 "trtype": "TCP", 00:13:52.137 "adrfam": "IPv4", 00:13:52.137 "traddr": "10.0.0.1", 00:13:52.137 "trsvcid": "54860" 00:13:52.137 }, 00:13:52.137 "auth": { 00:13:52.137 "state": "completed", 00:13:52.137 "digest": "sha384", 00:13:52.137 "dhgroup": "ffdhe8192" 00:13:52.137 } 00:13:52.137 } 00:13:52.137 ]' 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.137 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.397 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:52.965 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.224 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.791 00:13:53.791 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.791 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.791 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.050 { 00:13:54.050 "cntlid": 93, 00:13:54.050 "qid": 0, 00:13:54.050 "state": "enabled", 00:13:54.050 "thread": "nvmf_tgt_poll_group_000", 00:13:54.050 "listen_address": { 00:13:54.050 "trtype": "TCP", 00:13:54.050 "adrfam": "IPv4", 
00:13:54.050 "traddr": "10.0.0.2", 00:13:54.050 "trsvcid": "4420" 00:13:54.050 }, 00:13:54.050 "peer_address": { 00:13:54.050 "trtype": "TCP", 00:13:54.050 "adrfam": "IPv4", 00:13:54.050 "traddr": "10.0.0.1", 00:13:54.050 "trsvcid": "58518" 00:13:54.050 }, 00:13:54.050 "auth": { 00:13:54.050 "state": "completed", 00:13:54.050 "digest": "sha384", 00:13:54.050 "dhgroup": "ffdhe8192" 00:13:54.050 } 00:13:54.050 } 00:13:54.050 ]' 00:13:54.050 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.309 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.568 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:55.183 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.452 14:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:55.452 14:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:56.020 00:13:56.020 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.020 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.020 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.280 { 00:13:56.280 "cntlid": 95, 00:13:56.280 "qid": 0, 00:13:56.280 "state": "enabled", 00:13:56.280 "thread": "nvmf_tgt_poll_group_000", 00:13:56.280 "listen_address": { 00:13:56.280 "trtype": "TCP", 00:13:56.280 "adrfam": "IPv4", 00:13:56.280 "traddr": "10.0.0.2", 00:13:56.280 "trsvcid": "4420" 00:13:56.280 }, 00:13:56.280 "peer_address": { 00:13:56.280 "trtype": "TCP", 00:13:56.280 "adrfam": "IPv4", 00:13:56.280 "traddr": "10.0.0.1", 00:13:56.280 "trsvcid": "58542" 00:13:56.280 }, 00:13:56.280 "auth": { 00:13:56.280 "state": "completed", 00:13:56.280 "digest": "sha384", 00:13:56.280 "dhgroup": "ffdhe8192" 00:13:56.280 } 00:13:56.280 } 00:13:56.280 ]' 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:56.280 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.539 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.539 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.539 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.539 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:57.477 14:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.477 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.736 00:13:57.736 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.736 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.736 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.995 { 00:13:57.995 "cntlid": 97, 00:13:57.995 "qid": 0, 00:13:57.995 "state": "enabled", 00:13:57.995 "thread": "nvmf_tgt_poll_group_000", 00:13:57.995 "listen_address": { 00:13:57.995 "trtype": "TCP", 00:13:57.995 "adrfam": "IPv4", 00:13:57.995 "traddr": "10.0.0.2", 00:13:57.995 "trsvcid": "4420" 00:13:57.995 }, 00:13:57.995 "peer_address": { 00:13:57.995 "trtype": "TCP", 00:13:57.995 "adrfam": "IPv4", 00:13:57.995 "traddr": "10.0.0.1", 00:13:57.995 "trsvcid": "58554" 00:13:57.995 }, 00:13:57.995 "auth": { 00:13:57.995 "state": "completed", 00:13:57.995 "digest": "sha512", 00:13:57.995 "dhgroup": "null" 00:13:57.995 } 00:13:57.995 } 00:13:57.995 ]' 00:13:57.995 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.255 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.513 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:59.081 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.341 14:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.341 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.908 00:13:59.908 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.908 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.908 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.167 { 00:14:00.167 "cntlid": 99, 00:14:00.167 "qid": 0, 00:14:00.167 "state": "enabled", 00:14:00.167 "thread": "nvmf_tgt_poll_group_000", 00:14:00.167 "listen_address": { 00:14:00.167 "trtype": "TCP", 00:14:00.167 "adrfam": "IPv4", 00:14:00.167 "traddr": "10.0.0.2", 00:14:00.167 "trsvcid": "4420" 00:14:00.167 }, 00:14:00.167 "peer_address": { 00:14:00.167 "trtype": "TCP", 00:14:00.167 "adrfam": "IPv4", 00:14:00.167 "traddr": "10.0.0.1", 00:14:00.167 "trsvcid": "58580" 00:14:00.167 }, 00:14:00.167 "auth": { 00:14:00.167 "state": "completed", 00:14:00.167 "digest": "sha512", 00:14:00.167 "dhgroup": "null" 00:14:00.167 } 00:14:00.167 } 00:14:00.167 ]' 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
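Each iteration in the trace above follows the same host-side pattern: restrict the initiator's DH-HMAC-CHAP parameters with bdev_nvme_set_options, register the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC socket, and read the negotiated auth block back out of nvmf_subsystem_get_qpairs. Below is a minimal bash sketch of that sequence, assuming keys key0..key3/ckey0..ckey3 are already loaded and that rpc_cmd talks to the target over the default RPC socket (the trace only shows it as a wrapper); it is an illustration of the pattern, not a reproduction of target/auth.sh.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as seen in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPCs (socket from the trace)
rpc_cmd() { "$rpc" "$@"; }                         # target-side RPCs (default socket assumed)

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
digest=sha512 dhgroup=ffdhe2048 key=key1 ckey=ckey1

# Limit the host to the digest/dhgroup combination under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow this host on the subsystem, authenticated with the chosen key pair.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Attach a controller from the host side; DH-HMAC-CHAP runs during this call.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Confirm the controller exists and inspect the negotiated auth parameters.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
```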
00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.167 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.425 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:00.992 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.992 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:00.992 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.992 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.250 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.509 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.509 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.509 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.767 00:14:01.767 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.767 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.767 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.084 { 00:14:02.084 "cntlid": 101, 00:14:02.084 "qid": 0, 00:14:02.084 "state": "enabled", 00:14:02.084 "thread": "nvmf_tgt_poll_group_000", 00:14:02.084 "listen_address": { 00:14:02.084 "trtype": "TCP", 00:14:02.084 "adrfam": "IPv4", 00:14:02.084 "traddr": "10.0.0.2", 00:14:02.084 "trsvcid": "4420" 00:14:02.084 }, 00:14:02.084 "peer_address": { 00:14:02.084 "trtype": "TCP", 00:14:02.084 "adrfam": "IPv4", 00:14:02.084 "traddr": "10.0.0.1", 00:14:02.084 "trsvcid": "58604" 00:14:02.084 }, 00:14:02.084 "auth": { 00:14:02.084 "state": "completed", 00:14:02.084 "digest": "sha512", 00:14:02.084 "dhgroup": "null" 00:14:02.084 } 00:14:02.084 } 00:14:02.084 ]' 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.084 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.342 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.907 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.165 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.423 
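Between iterations the trace also exercises the kernel initiator: the SPDK host controller is detached, the same key material is handed to nvme-cli as DHHC-1 secrets, and the fabric connection is torn down again before the host entry is removed from the subsystem. A hedged sketch of that leg follows; the secrets are placeholders rather than the DHHC-1 strings from the log, and the target-side rpc.py socket is assumed to be the default.

```bash
#!/usr/bin/env bash
# Kernel-initiator leg of one iteration (sketch; secrets are placeholders).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12
dhchap_secret='DHHC-1:02:<host key material>:'        # placeholder host secret
dhchap_ctrl_secret='DHHC-1:01:<ctrlr key material>:'  # placeholder controller secret

# Drop the SPDK host-side controller first so nvme-cli owns the connection.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Connect with the kernel initiator, authenticating with the same key pair.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"

# Tear down and de-register the host before the next key is tested.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```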
00:14:03.682 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.682 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.682 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.942 { 00:14:03.942 "cntlid": 103, 00:14:03.942 "qid": 0, 00:14:03.942 "state": "enabled", 00:14:03.942 "thread": "nvmf_tgt_poll_group_000", 00:14:03.942 "listen_address": { 00:14:03.942 "trtype": "TCP", 00:14:03.942 "adrfam": "IPv4", 00:14:03.942 "traddr": "10.0.0.2", 00:14:03.942 "trsvcid": "4420" 00:14:03.942 }, 00:14:03.942 "peer_address": { 00:14:03.942 "trtype": "TCP", 00:14:03.942 "adrfam": "IPv4", 00:14:03.942 "traddr": "10.0.0.1", 00:14:03.942 "trsvcid": "45888" 00:14:03.942 }, 00:14:03.942 "auth": { 00:14:03.942 "state": "completed", 00:14:03.942 "digest": "sha512", 00:14:03.942 "dhgroup": "null" 00:14:03.942 } 00:14:03.942 } 00:14:03.942 ]' 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.942 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.202 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:04.770 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.030 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.598 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.598 { 00:14:05.598 "cntlid": 105, 00:14:05.598 "qid": 0, 00:14:05.598 "state": "enabled", 00:14:05.598 "thread": "nvmf_tgt_poll_group_000", 00:14:05.598 "listen_address": { 00:14:05.598 "trtype": "TCP", 00:14:05.598 "adrfam": "IPv4", 00:14:05.598 "traddr": "10.0.0.2", 00:14:05.598 "trsvcid": "4420" 00:14:05.598 }, 00:14:05.598 "peer_address": { 00:14:05.598 "trtype": "TCP", 00:14:05.598 "adrfam": "IPv4", 00:14:05.598 "traddr": "10.0.0.1", 00:14:05.598 "trsvcid": "45914" 00:14:05.598 }, 00:14:05.598 "auth": { 00:14:05.598 "state": "completed", 00:14:05.598 "digest": "sha512", 00:14:05.598 "dhgroup": "ffdhe2048" 00:14:05.598 } 00:14:05.598 } 00:14:05.598 ]' 00:14:05.598 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.856 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.856 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.856 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.856 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.856 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.856 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.856 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.116 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
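The assertions after each attach are plain jq lookups against the first qpair returned by nvmf_subsystem_get_qpairs: the negotiated digest, DH group, and authentication state have to match what bdev_nvme_set_options allowed. A small stand-alone check in the same spirit is sketched below; it uses jq -e for a non-zero exit on mismatch, whereas the test script itself uses jq -r plus [[ ... ]] comparisons as seen in the trace.

```bash
#!/usr/bin/env bash
# Verify the negotiated auth block on the first qpair (sketch).
subnqn=nqn.2024-03.io.spdk:cnode0
expected_digest=sha512 expected_dhgroup=ffdhe2048

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")

# Fail (non-zero exit) unless digest, dhgroup, and state all match.
jq -e --arg d "$expected_digest" --arg g "$expected_dhgroup" \
    '.[0].auth | .digest == $d and .dhgroup == $g and .state == "completed"' \
    <<< "$qpairs"
```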
00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:06.685 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.944 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.203 00:14:07.203 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.203 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.203 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.514 { 00:14:07.514 "cntlid": 107, 00:14:07.514 "qid": 0, 00:14:07.514 "state": "enabled", 00:14:07.514 "thread": "nvmf_tgt_poll_group_000", 00:14:07.514 "listen_address": { 00:14:07.514 "trtype": "TCP", 00:14:07.514 "adrfam": "IPv4", 00:14:07.514 "traddr": "10.0.0.2", 00:14:07.514 "trsvcid": "4420" 00:14:07.514 }, 00:14:07.514 "peer_address": { 00:14:07.514 "trtype": "TCP", 00:14:07.514 "adrfam": "IPv4", 00:14:07.514 "traddr": "10.0.0.1", 00:14:07.514 "trsvcid": "45950" 00:14:07.514 }, 00:14:07.514 "auth": { 00:14:07.514 "state": "completed", 00:14:07.514 "digest": "sha512", 00:14:07.514 "dhgroup": "ffdhe2048" 00:14:07.514 } 00:14:07.514 } 00:14:07.514 ]' 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.514 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.808 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:08.376 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.635 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.895 00:14:08.895 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.895 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.895 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.154 { 00:14:09.154 "cntlid": 109, 00:14:09.154 "qid": 
0, 00:14:09.154 "state": "enabled", 00:14:09.154 "thread": "nvmf_tgt_poll_group_000", 00:14:09.154 "listen_address": { 00:14:09.154 "trtype": "TCP", 00:14:09.154 "adrfam": "IPv4", 00:14:09.154 "traddr": "10.0.0.2", 00:14:09.154 "trsvcid": "4420" 00:14:09.154 }, 00:14:09.154 "peer_address": { 00:14:09.154 "trtype": "TCP", 00:14:09.154 "adrfam": "IPv4", 00:14:09.154 "traddr": "10.0.0.1", 00:14:09.154 "trsvcid": "45986" 00:14:09.154 }, 00:14:09.154 "auth": { 00:14:09.154 "state": "completed", 00:14:09.154 "digest": "sha512", 00:14:09.154 "dhgroup": "ffdhe2048" 00:14:09.154 } 00:14:09.154 } 00:14:09.154 ]' 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.154 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.414 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:09.983 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.983 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:09.983 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.983 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.243 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.243 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.243 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:10.243 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:10.243 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.244 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.502 00:14:10.761 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.761 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.761 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.761 { 00:14:10.761 "cntlid": 111, 00:14:10.761 "qid": 0, 00:14:10.761 "state": "enabled", 00:14:10.761 "thread": "nvmf_tgt_poll_group_000", 00:14:10.761 "listen_address": { 00:14:10.761 "trtype": "TCP", 00:14:10.761 "adrfam": "IPv4", 00:14:10.761 "traddr": "10.0.0.2", 00:14:10.761 "trsvcid": "4420" 00:14:10.761 }, 00:14:10.761 "peer_address": { 00:14:10.761 "trtype": "TCP", 00:14:10.761 "adrfam": "IPv4", 00:14:10.761 "traddr": "10.0.0.1", 00:14:10.761 "trsvcid": "46018" 00:14:10.761 }, 00:14:10.761 "auth": { 00:14:10.761 "state": "completed", 00:14:10.761 
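Zooming out, the trace is driven by three nested loops visible in the xtrace markers (target/auth.sh lines 91-94): digests × DH groups × key indices, with bdev_nvme_set_options re-applied before every connect_authenticate call. A simplified reconstruction of that driver loop is shown below; the exact contents of the digests and dhgroups arrays are assumptions inferred from the values that appear in this log, and hostrpc/connect_authenticate refer to the helpers sketched earlier and traced above.

```bash
#!/usr/bin/env bash
# Simplified driver loop reconstructed from the xtrace markers (sketch).
digests=(sha256 sha384 sha512)                   # assumed; sha384/sha512 appear in this log
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)    # assumed; these four appear in this log
keys=(key0 key1 key2 key3)                       # key names as used in the trace

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to one digest/dhgroup, then run one auth cycle.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```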
"digest": "sha512", 00:14:10.761 "dhgroup": "ffdhe2048" 00:14:10.761 } 00:14:10.761 } 00:14:10.761 ]' 00:14:10.761 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.020 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.278 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:11.847 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:12.105 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:12.105 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.105 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:12.105 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.105 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.106 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.365 00:14:12.624 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.624 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.625 { 00:14:12.625 "cntlid": 113, 00:14:12.625 "qid": 0, 00:14:12.625 "state": "enabled", 00:14:12.625 "thread": "nvmf_tgt_poll_group_000", 00:14:12.625 "listen_address": { 00:14:12.625 "trtype": "TCP", 00:14:12.625 "adrfam": "IPv4", 00:14:12.625 "traddr": "10.0.0.2", 00:14:12.625 "trsvcid": "4420" 00:14:12.625 }, 00:14:12.625 "peer_address": { 00:14:12.625 "trtype": "TCP", 00:14:12.625 "adrfam": "IPv4", 00:14:12.625 "traddr": "10.0.0.1", 00:14:12.625 "trsvcid": "46030" 00:14:12.625 }, 00:14:12.625 "auth": { 00:14:12.625 "state": "completed", 00:14:12.625 "digest": "sha512", 00:14:12.625 "dhgroup": "ffdhe3072" 00:14:12.625 } 00:14:12.625 } 00:14:12.625 ]' 00:14:12.625 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.883 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.883 14:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.883 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.883 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.883 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.883 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.884 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.143 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:13.726 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.001 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.258 00:14:14.258 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.258 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.258 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.515 { 00:14:14.515 "cntlid": 115, 00:14:14.515 "qid": 0, 00:14:14.515 "state": "enabled", 00:14:14.515 "thread": "nvmf_tgt_poll_group_000", 00:14:14.515 "listen_address": { 00:14:14.515 "trtype": "TCP", 00:14:14.515 "adrfam": "IPv4", 00:14:14.515 "traddr": "10.0.0.2", 00:14:14.515 "trsvcid": "4420" 00:14:14.515 }, 00:14:14.515 "peer_address": { 00:14:14.515 "trtype": "TCP", 00:14:14.515 "adrfam": "IPv4", 00:14:14.515 "traddr": "10.0.0.1", 00:14:14.515 "trsvcid": "39872" 00:14:14.515 }, 00:14:14.515 "auth": { 00:14:14.515 "state": "completed", 00:14:14.515 "digest": "sha512", 00:14:14.515 "dhgroup": "ffdhe3072" 00:14:14.515 } 00:14:14.515 } 00:14:14.515 ]' 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.515 14:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.515 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.775 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:15.344 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.604 14:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.604 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.864 00:14:15.864 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.864 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.864 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.124 { 00:14:16.124 "cntlid": 117, 00:14:16.124 "qid": 0, 00:14:16.124 "state": "enabled", 00:14:16.124 "thread": "nvmf_tgt_poll_group_000", 00:14:16.124 "listen_address": { 00:14:16.124 "trtype": "TCP", 00:14:16.124 "adrfam": "IPv4", 00:14:16.124 "traddr": "10.0.0.2", 00:14:16.124 "trsvcid": "4420" 00:14:16.124 }, 00:14:16.124 "peer_address": { 00:14:16.124 "trtype": "TCP", 00:14:16.124 "adrfam": "IPv4", 00:14:16.124 "traddr": "10.0.0.1", 00:14:16.124 "trsvcid": "39904" 00:14:16.124 }, 00:14:16.124 "auth": { 00:14:16.124 "state": "completed", 00:14:16.124 "digest": "sha512", 00:14:16.124 "dhgroup": "ffdhe3072" 00:14:16.124 } 00:14:16.124 } 00:14:16.124 ]' 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.124 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.383 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.383 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.383 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.383 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.383 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
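[editor's sketch] For readability, the lines below condense one iteration of the per-dhgroup / per-key loop that target/auth.sh is tracing above. This is reconstructed only from commands already visible in the xtrace, not part of the original console output; rpc_cmd stands for the test suite's target-side RPC wrapper, HOSTNQN and the DHHC-1 secret are placeholders for the values shown in the trace, and the qpair check is shown piped directly to jq rather than through the intermediate variable the script uses.

    # one pass of the sha512 / ffdhe3072 authentication check, as traced above
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12"
    SUBNQN="nqn.2024-03.io.spdk:cnode0"

    # 1. restrict the host-side bdev layer to the digest/dhgroup under test
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # 2. allow the host on the target with the key pair under test
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. attach a controller via the host RPC and verify the negotiated auth fields
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

    # 4. tear down, repeat the handshake through the kernel initiator, then clean up
    $HOSTRPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret "DHHC-1:..."   # secret elided
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The same sequence then repeats for each key index and for the larger ffdhe groups, which is the pattern the remainder of this log follows.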
00:14:16.642 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.213 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.802 00:14:17.802 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.802 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.802 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.802 { 00:14:17.802 "cntlid": 119, 00:14:17.802 "qid": 0, 00:14:17.802 "state": "enabled", 00:14:17.802 "thread": "nvmf_tgt_poll_group_000", 00:14:17.802 "listen_address": { 00:14:17.802 "trtype": "TCP", 00:14:17.802 "adrfam": "IPv4", 00:14:17.802 "traddr": "10.0.0.2", 00:14:17.802 "trsvcid": "4420" 00:14:17.802 }, 00:14:17.802 "peer_address": { 00:14:17.802 "trtype": "TCP", 00:14:17.802 "adrfam": "IPv4", 00:14:17.802 "traddr": "10.0.0.1", 00:14:17.802 "trsvcid": "39920" 00:14:17.802 }, 00:14:17.802 "auth": { 00:14:17.802 "state": "completed", 00:14:17.802 "digest": "sha512", 00:14:17.802 "dhgroup": "ffdhe3072" 00:14:17.802 } 00:14:17.802 } 00:14:17.802 ]' 00:14:17.802 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.070 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.329 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:18.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:18.898 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.157 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.417 00:14:19.417 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.417 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.417 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.676 { 00:14:19.676 "cntlid": 121, 00:14:19.676 "qid": 0, 00:14:19.676 "state": "enabled", 00:14:19.676 "thread": "nvmf_tgt_poll_group_000", 00:14:19.676 "listen_address": { 00:14:19.676 "trtype": "TCP", 00:14:19.676 "adrfam": "IPv4", 00:14:19.676 "traddr": "10.0.0.2", 00:14:19.676 "trsvcid": "4420" 00:14:19.676 }, 00:14:19.676 "peer_address": { 00:14:19.676 "trtype": "TCP", 00:14:19.676 "adrfam": "IPv4", 00:14:19.676 "traddr": "10.0.0.1", 00:14:19.676 "trsvcid": "39946" 00:14:19.676 }, 00:14:19.676 "auth": { 00:14:19.676 "state": "completed", 00:14:19.676 "digest": "sha512", 00:14:19.676 "dhgroup": "ffdhe4096" 00:14:19.676 } 00:14:19.676 } 00:14:19.676 ]' 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.676 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.935 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.935 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.935 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.935 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:20.872 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.872 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.438 00:14:21.438 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.438 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.438 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.704 14:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.704 { 00:14:21.704 "cntlid": 123, 00:14:21.704 "qid": 0, 00:14:21.704 "state": "enabled", 00:14:21.704 "thread": "nvmf_tgt_poll_group_000", 00:14:21.704 "listen_address": { 00:14:21.704 "trtype": "TCP", 00:14:21.704 "adrfam": "IPv4", 00:14:21.704 "traddr": "10.0.0.2", 00:14:21.704 "trsvcid": "4420" 00:14:21.704 }, 00:14:21.704 "peer_address": { 00:14:21.704 "trtype": "TCP", 00:14:21.704 "adrfam": "IPv4", 00:14:21.704 "traddr": "10.0.0.1", 00:14:21.704 "trsvcid": "39966" 00:14:21.704 }, 00:14:21.704 "auth": { 00:14:21.704 "state": "completed", 00:14:21.704 "digest": "sha512", 00:14:21.704 "dhgroup": "ffdhe4096" 00:14:21.704 } 00:14:21.704 } 00:14:21.704 ]' 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.704 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.963 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:22.900 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.900 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.160 00:14:23.160 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.160 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.160 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.421 { 00:14:23.421 "cntlid": 125, 00:14:23.421 "qid": 0, 00:14:23.421 "state": "enabled", 00:14:23.421 "thread": "nvmf_tgt_poll_group_000", 00:14:23.421 "listen_address": { 00:14:23.421 "trtype": "TCP", 00:14:23.421 "adrfam": "IPv4", 00:14:23.421 "traddr": "10.0.0.2", 00:14:23.421 "trsvcid": "4420" 00:14:23.421 }, 00:14:23.421 "peer_address": { 00:14:23.421 "trtype": "TCP", 00:14:23.421 "adrfam": "IPv4", 00:14:23.421 "traddr": "10.0.0.1", 00:14:23.421 "trsvcid": "39984" 00:14:23.421 }, 00:14:23.421 "auth": { 00:14:23.421 "state": "completed", 00:14:23.421 "digest": "sha512", 00:14:23.421 "dhgroup": "ffdhe4096" 00:14:23.421 } 00:14:23.421 } 00:14:23.421 ]' 00:14:23.421 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.681 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.940 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:24.512 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.779 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.037 00:14:25.037 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.037 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.037 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.604 { 00:14:25.604 "cntlid": 127, 00:14:25.604 "qid": 0, 00:14:25.604 "state": "enabled", 00:14:25.604 "thread": "nvmf_tgt_poll_group_000", 00:14:25.604 "listen_address": { 00:14:25.604 "trtype": "TCP", 00:14:25.604 "adrfam": "IPv4", 00:14:25.604 "traddr": "10.0.0.2", 00:14:25.604 "trsvcid": "4420" 00:14:25.604 }, 00:14:25.604 "peer_address": { 
00:14:25.604 "trtype": "TCP", 00:14:25.604 "adrfam": "IPv4", 00:14:25.604 "traddr": "10.0.0.1", 00:14:25.604 "trsvcid": "46834" 00:14:25.604 }, 00:14:25.604 "auth": { 00:14:25.604 "state": "completed", 00:14:25.604 "digest": "sha512", 00:14:25.604 "dhgroup": "ffdhe4096" 00:14:25.604 } 00:14:25.604 } 00:14:25.604 ]' 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.604 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.863 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:26.429 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.688 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.255 00:14:27.255 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.255 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.255 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.514 { 00:14:27.514 "cntlid": 129, 00:14:27.514 "qid": 0, 00:14:27.514 "state": "enabled", 00:14:27.514 "thread": "nvmf_tgt_poll_group_000", 00:14:27.514 "listen_address": { 00:14:27.514 "trtype": "TCP", 00:14:27.514 "adrfam": "IPv4", 00:14:27.514 "traddr": "10.0.0.2", 00:14:27.514 "trsvcid": "4420" 00:14:27.514 }, 00:14:27.514 "peer_address": { 00:14:27.514 "trtype": "TCP", 00:14:27.514 "adrfam": "IPv4", 00:14:27.514 "traddr": "10.0.0.1", 00:14:27.514 "trsvcid": "46858" 00:14:27.514 }, 00:14:27.514 "auth": { 00:14:27.514 "state": "completed", 00:14:27.514 "digest": "sha512", 00:14:27.514 "dhgroup": "ffdhe6144" 00:14:27.514 } 00:14:27.514 } 00:14:27.514 ]' 00:14:27.514 14:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.514 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.773 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.710 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.279 00:14:29.279 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.279 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.279 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.539 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.539 { 00:14:29.539 "cntlid": 131, 00:14:29.539 "qid": 0, 00:14:29.539 "state": "enabled", 00:14:29.539 "thread": "nvmf_tgt_poll_group_000", 00:14:29.539 "listen_address": { 00:14:29.539 "trtype": "TCP", 00:14:29.539 "adrfam": "IPv4", 00:14:29.539 "traddr": "10.0.0.2", 00:14:29.539 "trsvcid": "4420" 00:14:29.539 }, 00:14:29.539 "peer_address": { 00:14:29.540 "trtype": "TCP", 00:14:29.540 "adrfam": "IPv4", 00:14:29.540 "traddr": "10.0.0.1", 00:14:29.540 "trsvcid": "46878" 00:14:29.540 }, 00:14:29.540 "auth": { 00:14:29.540 "state": "completed", 00:14:29.540 "digest": "sha512", 00:14:29.540 "dhgroup": "ffdhe6144" 00:14:29.540 } 00:14:29.540 } 00:14:29.540 ]' 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.540 14:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.540 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.799 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:30.370 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
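The iterations above and below all exercise the same connect_authenticate flow from target/auth.sh: allow the host NQN on the subsystem with a DH-HMAC-CHAP key, attach a host controller with the matching key, confirm the qpair negotiated the expected digest/dhgroup and reached the "completed" auth state, then detach. The lines below are a minimal sketch of one such pass, not a verbatim extract: it assumes the key names (key2/ckey2) were registered with the target and host earlier in the run (outside this excerpt), and it writes the target-side rpc_cmd wrapper as a plain rpc.py call against the target's default RPC socket, which this trace does not show explicitly.

# Sketch of one connect_authenticate-style pass (digest/dhgroup/key follow this iteration).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12

# Host side: restrict negotiation to a single digest/dhgroup pair.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN with a DH-HMAC-CHAP key and (optionally) a controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach with the same key pair, verify the controller and qpair auth state, detach.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect "completed"
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0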
00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.630 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.631 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.198 00:14:31.198 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.198 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.198 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.457 { 00:14:31.457 "cntlid": 133, 00:14:31.457 "qid": 0, 00:14:31.457 "state": "enabled", 00:14:31.457 "thread": "nvmf_tgt_poll_group_000", 00:14:31.457 "listen_address": { 00:14:31.457 "trtype": "TCP", 00:14:31.457 "adrfam": "IPv4", 00:14:31.457 "traddr": "10.0.0.2", 00:14:31.457 "trsvcid": "4420" 00:14:31.457 }, 00:14:31.457 "peer_address": { 00:14:31.457 "trtype": "TCP", 00:14:31.457 "adrfam": "IPv4", 00:14:31.457 "traddr": "10.0.0.1", 00:14:31.457 "trsvcid": "46910" 00:14:31.457 }, 00:14:31.457 "auth": { 00:14:31.457 "state": "completed", 00:14:31.457 "digest": "sha512", 00:14:31.457 "dhgroup": "ffdhe6144" 00:14:31.457 } 00:14:31.457 } 00:14:31.457 ]' 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.457 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.715 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:32.287 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.557 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.124 00:14:33.124 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.124 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.124 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.383 { 00:14:33.383 "cntlid": 135, 00:14:33.383 "qid": 0, 00:14:33.383 "state": "enabled", 00:14:33.383 "thread": "nvmf_tgt_poll_group_000", 00:14:33.383 "listen_address": { 00:14:33.383 "trtype": "TCP", 00:14:33.383 "adrfam": "IPv4", 00:14:33.383 "traddr": "10.0.0.2", 00:14:33.383 "trsvcid": "4420" 00:14:33.383 }, 00:14:33.383 "peer_address": { 00:14:33.383 "trtype": "TCP", 00:14:33.383 "adrfam": "IPv4", 00:14:33.383 "traddr": "10.0.0.1", 00:14:33.383 "trsvcid": "46928" 00:14:33.383 }, 00:14:33.383 "auth": { 00:14:33.383 "state": "completed", 00:14:33.383 "digest": "sha512", 00:14:33.383 "dhgroup": "ffdhe6144" 00:14:33.383 } 00:14:33.383 } 00:14:33.383 ]' 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.383 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.585 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.153 00:14:35.153 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.153 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.153 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.412 { 00:14:35.412 "cntlid": 137, 00:14:35.412 "qid": 0, 00:14:35.412 "state": "enabled", 00:14:35.412 "thread": "nvmf_tgt_poll_group_000", 00:14:35.412 "listen_address": { 00:14:35.412 "trtype": "TCP", 00:14:35.412 "adrfam": "IPv4", 00:14:35.412 "traddr": "10.0.0.2", 00:14:35.412 "trsvcid": "4420" 00:14:35.412 }, 00:14:35.412 "peer_address": { 00:14:35.412 "trtype": "TCP", 00:14:35.412 "adrfam": "IPv4", 00:14:35.412 "traddr": "10.0.0.1", 00:14:35.412 "trsvcid": "34120" 00:14:35.412 }, 00:14:35.412 "auth": { 00:14:35.412 "state": "completed", 00:14:35.412 "digest": "sha512", 00:14:35.412 "dhgroup": "ffdhe8192" 00:14:35.412 } 00:14:35.412 } 00:14:35.412 ]' 00:14:35.412 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.671 14:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.930 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:36.506 14:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:36.506 14:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.765 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.333 00:14:37.592 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.592 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
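After each SPDK-host pass, the trace also exercises the kernel initiator with nvme-cli against the same subsystem, then removes the host entry so the next key/dhgroup combination can be re-added. A rough sketch of that leg follows; the DHHC-1 secret strings are placeholders for the values printed in the trace, and the target-side rpc_cmd is again assumed to map to a plain rpc.py call.

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12

# Kernel initiator leg: connect in-band with the host and controller secrets, then disconnect.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 \
  --dhchap-secret 'DHHC-1:01:<host-secret-from-trace>:' \
  --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret-from-trace>:'
nvme disconnect -n "$subnqn"    # logs "NQN:... disconnected 1 controller(s)"

# Target side: drop the host entry before the next iteration re-adds it with another key.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Later in this excerpt the same attach is deliberately expected to fail: once the host has been allowed with key1 only (or with a different controller key), bdev_nvme_attach_controller with key2 or a mismatched ckey returns JSON-RPC error -5 ("Input/output error"), which the surrounding NOT helper counts as the passing outcome.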
00:14:37.592 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.851 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.851 { 00:14:37.851 "cntlid": 139, 00:14:37.851 "qid": 0, 00:14:37.851 "state": "enabled", 00:14:37.851 "thread": "nvmf_tgt_poll_group_000", 00:14:37.851 "listen_address": { 00:14:37.851 "trtype": "TCP", 00:14:37.851 "adrfam": "IPv4", 00:14:37.851 "traddr": "10.0.0.2", 00:14:37.851 "trsvcid": "4420" 00:14:37.851 }, 00:14:37.851 "peer_address": { 00:14:37.851 "trtype": "TCP", 00:14:37.852 "adrfam": "IPv4", 00:14:37.852 "traddr": "10.0.0.1", 00:14:37.852 "trsvcid": "34146" 00:14:37.852 }, 00:14:37.852 "auth": { 00:14:37.852 "state": "completed", 00:14:37.852 "digest": "sha512", 00:14:37.852 "dhgroup": "ffdhe8192" 00:14:37.852 } 00:14:37.852 } 00:14:37.852 ]' 00:14:37.852 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.852 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.111 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:01:NDViYjI0NDQ3YWU0M2Q2OGQ4MDEzMjI3NWRiZTQxNjSbzC2l: --dhchap-ctrl-secret DHHC-1:02:Nzc1MmQzZjYyYzQ0YjIwMjBiY2Q5Zjc3ZDRlYjM2YTQyNmMzOTY2OTZiYWFmYTZjlmGSaw==: 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:39.096 14:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:39.096 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.097 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.665 00:14:39.665 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.665 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.665 14:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.924 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.924 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:39.924 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.924 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.183 { 00:14:40.183 "cntlid": 141, 00:14:40.183 "qid": 0, 00:14:40.183 "state": "enabled", 00:14:40.183 "thread": "nvmf_tgt_poll_group_000", 00:14:40.183 "listen_address": { 00:14:40.183 "trtype": "TCP", 00:14:40.183 "adrfam": "IPv4", 00:14:40.183 "traddr": "10.0.0.2", 00:14:40.183 "trsvcid": "4420" 00:14:40.183 }, 00:14:40.183 "peer_address": { 00:14:40.183 "trtype": "TCP", 00:14:40.183 "adrfam": "IPv4", 00:14:40.183 "traddr": "10.0.0.1", 00:14:40.183 "trsvcid": "34166" 00:14:40.183 }, 00:14:40.183 "auth": { 00:14:40.183 "state": "completed", 00:14:40.183 "digest": "sha512", 00:14:40.183 "dhgroup": "ffdhe8192" 00:14:40.183 } 00:14:40.183 } 00:14:40.183 ]' 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.183 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.443 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:02:YzI0NDRkYzkwMTdiZmEyNTM1Njc1OTgzNDM3MGY3NWZiNDAzNGZiOGQ5NGExYzhmJCAiGA==: --dhchap-ctrl-secret DHHC-1:01:ODlmMDZiODNhMWY5YjZiMjJjMTkzZDZkMTMyMWY4MGaiAp3A: 00:14:41.011 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.270 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.530 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.530 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.530 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.095 00:14:42.095 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.095 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.095 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.353 { 00:14:42.353 "cntlid": 
143, 00:14:42.353 "qid": 0, 00:14:42.353 "state": "enabled", 00:14:42.353 "thread": "nvmf_tgt_poll_group_000", 00:14:42.353 "listen_address": { 00:14:42.353 "trtype": "TCP", 00:14:42.353 "adrfam": "IPv4", 00:14:42.353 "traddr": "10.0.0.2", 00:14:42.353 "trsvcid": "4420" 00:14:42.353 }, 00:14:42.353 "peer_address": { 00:14:42.353 "trtype": "TCP", 00:14:42.353 "adrfam": "IPv4", 00:14:42.353 "traddr": "10.0.0.1", 00:14:42.353 "trsvcid": "34190" 00:14:42.353 }, 00:14:42.353 "auth": { 00:14:42.353 "state": "completed", 00:14:42.353 "digest": "sha512", 00:14:42.353 "dhgroup": "ffdhe8192" 00:14:42.353 } 00:14:42.353 } 00:14:42.353 ]' 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.353 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.611 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.549 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.122 00:14:44.382 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.382 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.382 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.642 { 00:14:44.642 
"cntlid": 145, 00:14:44.642 "qid": 0, 00:14:44.642 "state": "enabled", 00:14:44.642 "thread": "nvmf_tgt_poll_group_000", 00:14:44.642 "listen_address": { 00:14:44.642 "trtype": "TCP", 00:14:44.642 "adrfam": "IPv4", 00:14:44.642 "traddr": "10.0.0.2", 00:14:44.642 "trsvcid": "4420" 00:14:44.642 }, 00:14:44.642 "peer_address": { 00:14:44.642 "trtype": "TCP", 00:14:44.642 "adrfam": "IPv4", 00:14:44.642 "traddr": "10.0.0.1", 00:14:44.642 "trsvcid": "54570" 00:14:44.642 }, 00:14:44.642 "auth": { 00:14:44.642 "state": "completed", 00:14:44.642 "digest": "sha512", 00:14:44.642 "dhgroup": "ffdhe8192" 00:14:44.642 } 00:14:44.642 } 00:14:44.642 ]' 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.642 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.901 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:00:M2I3YWY1ZWI3MGU5ZTAzNzI1MTU1YmM0ZGE3OWIzNmE2YmQ3MWU0NDViODU1OTVjh1puEA==: --dhchap-ctrl-secret DHHC-1:03:NWViZjQyN2IzMjUyMzBhN2Y2NmY2YzczZjgyY2I5ZTYyZWFkMTc1YmExMTJlYmNkMjI3YTg0MzY0OWQyMDYyMU1x0Xg=: 00:14:45.469 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.469 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:45.469 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.469 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:45.728 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:46.296 request: 00:14:46.296 { 00:14:46.296 "name": "nvme0", 00:14:46.296 "trtype": "tcp", 00:14:46.296 "traddr": "10.0.0.2", 00:14:46.296 "adrfam": "ipv4", 00:14:46.296 "trsvcid": "4420", 00:14:46.296 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:46.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:46.296 "prchk_reftag": false, 00:14:46.296 "prchk_guard": false, 00:14:46.296 "hdgst": false, 00:14:46.296 "ddgst": false, 00:14:46.296 "dhchap_key": "key2", 00:14:46.296 "method": "bdev_nvme_attach_controller", 00:14:46.296 "req_id": 1 00:14:46.296 } 00:14:46.296 Got JSON-RPC error response 00:14:46.296 response: 00:14:46.296 { 00:14:46.296 "code": -5, 00:14:46.296 "message": "Input/output error" 00:14:46.296 } 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.296 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:46.297 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.297 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.297 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.865 request: 00:14:46.865 { 00:14:46.865 "name": "nvme0", 00:14:46.865 "trtype": "tcp", 00:14:46.865 "traddr": "10.0.0.2", 00:14:46.865 "adrfam": "ipv4", 00:14:46.865 "trsvcid": "4420", 00:14:46.865 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:46.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:46.865 "prchk_reftag": false, 00:14:46.866 "prchk_guard": false, 00:14:46.866 "hdgst": false, 00:14:46.866 "ddgst": false, 00:14:46.866 "dhchap_key": "key1", 00:14:46.866 "dhchap_ctrlr_key": "ckey2", 00:14:46.866 "method": "bdev_nvme_attach_controller", 00:14:46.866 "req_id": 1 00:14:46.866 } 00:14:46.866 Got JSON-RPC error response 00:14:46.866 response: 00:14:46.866 { 00:14:46.866 "code": -5, 00:14:46.866 "message": "Input/output error" 
00:14:46.866 } 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.866 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key1 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.866 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.435 request: 00:14:47.435 { 00:14:47.435 "name": "nvme0", 00:14:47.435 "trtype": "tcp", 00:14:47.435 "traddr": "10.0.0.2", 00:14:47.435 "adrfam": "ipv4", 00:14:47.435 "trsvcid": "4420", 00:14:47.435 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:47.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:47.435 "prchk_reftag": false, 00:14:47.435 "prchk_guard": false, 00:14:47.435 "hdgst": false, 00:14:47.435 "ddgst": false, 00:14:47.435 "dhchap_key": "key1", 00:14:47.435 "dhchap_ctrlr_key": "ckey1", 00:14:47.435 "method": "bdev_nvme_attach_controller", 00:14:47.435 "req_id": 1 00:14:47.435 } 00:14:47.435 Got JSON-RPC error response 00:14:47.435 response: 00:14:47.435 { 00:14:47.435 "code": -5, 00:14:47.435 "message": "Input/output error" 00:14:47.435 } 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68604 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68604 ']' 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68604 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68604 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68604' 00:14:47.435 killing process with pid 68604 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68604 00:14:47.435 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68604 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71493 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71493 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71493 ']' 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.695 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71493 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71493 ']' 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
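The trace above tears down the previous target (pid 68604) and restarts it with nvmf_auth logging before the remaining DH-HMAC-CHAP cases run. A minimal sketch of that restart pattern follows; the relative paths and the readiness polling loop are assumptions for illustration, not copied from this run (the harness uses its own waitforlisten helper).

# Relaunch nvmf_tgt inside the test namespace with nvmf_auth logging enabled,
# then block until its JSON-RPC socket answers before issuing more rpc_cmd calls.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5    # assumed polling interval
done
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init   # finish init after --wait-for-rpc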
00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.635 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.895 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:48.895 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:48.895 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.895 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.155 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.723 00:14:49.723 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.723 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.723 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
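connect_authenticate in the trace pairs a target-side host entry with a host-side attach that uses the same DH-HMAC-CHAP key, then inspects the negotiated qpair. A rough sketch of that positive path, with socket paths as traced and $hostnqn standing in for the long uuid-based host NQN; the key names are assumed to be already-loaded keyring entries.

# Target side: allow the host NQN and bind it to key3 for DH-HMAC-CHAP.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
# Host side: attach with the matching key (this is the RPC the hostrpc helper issues).
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# Verify the negotiated state/digest/dhgroup on the established qpair.
./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'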
00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.982 { 00:14:49.982 "cntlid": 1, 00:14:49.982 "qid": 0, 00:14:49.982 "state": "enabled", 00:14:49.982 "thread": "nvmf_tgt_poll_group_000", 00:14:49.982 "listen_address": { 00:14:49.982 "trtype": "TCP", 00:14:49.982 "adrfam": "IPv4", 00:14:49.982 "traddr": "10.0.0.2", 00:14:49.982 "trsvcid": "4420" 00:14:49.982 }, 00:14:49.982 "peer_address": { 00:14:49.982 "trtype": "TCP", 00:14:49.982 "adrfam": "IPv4", 00:14:49.982 "traddr": "10.0.0.1", 00:14:49.982 "trsvcid": "54638" 00:14:49.982 }, 00:14:49.982 "auth": { 00:14:49.982 "state": "completed", 00:14:49.982 "digest": "sha512", 00:14:49.982 "dhgroup": "ffdhe8192" 00:14:49.982 } 00:14:49.982 } 00:14:49.982 ]' 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.982 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.240 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-secret DHHC-1:03:ZmY1ZjdiYjE4OTk2MDk1ZWM4MDU1MDJhN2VkYWMzYjQ0YmZmNjc5Njc5M2VjYzZkYjY2NTkxZWIxOThhMDRjMLAX63w=: 00:14:50.807 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --dhchap-key key3 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.066 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.325 request: 00:14:51.325 { 00:14:51.325 "name": "nvme0", 00:14:51.325 "trtype": "tcp", 00:14:51.325 "traddr": "10.0.0.2", 00:14:51.325 "adrfam": "ipv4", 00:14:51.325 "trsvcid": "4420", 00:14:51.325 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:51.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:51.325 "prchk_reftag": false, 00:14:51.325 "prchk_guard": false, 00:14:51.325 "hdgst": false, 00:14:51.325 "ddgst": false, 00:14:51.325 "dhchap_key": "key3", 00:14:51.325 "method": "bdev_nvme_attach_controller", 00:14:51.325 "req_id": 1 00:14:51.325 } 00:14:51.325 Got JSON-RPC error response 00:14:51.325 response: 00:14:51.325 { 00:14:51.325 "code": -5, 00:14:51.325 "message": "Input/output error" 00:14:51.325 } 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
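The NOT-wrapped attach traced above exercises the negative path: the host-side bdev layer is first restricted to a digest the target will not accept, so the attach is expected to fail with the -5 (Input/output error) JSON-RPC response shown. A condensed sketch of that pattern, using a plain conditional where the harness's NOT helper simply inverts the exit status:

# Host side: only offer sha256 for DH-HMAC-CHAP negotiation.
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
# The attach must now fail; treat success as a test error.
if ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "unexpected: attach succeeded with a disallowed digest" >&2
    exit 1
fi
# Restore the full digest/dhgroup lists before the next case.
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192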
00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:51.583 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.584 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.584 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.842 request: 00:14:51.842 { 00:14:51.842 "name": "nvme0", 00:14:51.842 "trtype": "tcp", 00:14:51.842 "traddr": "10.0.0.2", 00:14:51.842 "adrfam": "ipv4", 00:14:51.842 "trsvcid": "4420", 00:14:51.842 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:51.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:51.842 "prchk_reftag": false, 00:14:51.842 "prchk_guard": false, 00:14:51.842 "hdgst": false, 00:14:51.842 "ddgst": false, 00:14:51.842 "dhchap_key": "key3", 00:14:51.842 "method": "bdev_nvme_attach_controller", 00:14:51.842 "req_id": 1 00:14:51.842 } 00:14:51.842 Got JSON-RPC error response 
00:14:51.842 response: 00:14:51.842 { 00:14:51.842 "code": -5, 00:14:51.842 "message": "Input/output error" 00:14:51.842 } 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:52.101 14:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:52.101 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:52.360 request: 00:14:52.360 { 00:14:52.360 "name": "nvme0", 00:14:52.360 "trtype": "tcp", 00:14:52.360 "traddr": "10.0.0.2", 00:14:52.360 "adrfam": "ipv4", 00:14:52.360 "trsvcid": "4420", 00:14:52.360 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:52.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12", 00:14:52.360 "prchk_reftag": false, 00:14:52.360 "prchk_guard": false, 00:14:52.360 "hdgst": false, 00:14:52.360 "ddgst": false, 00:14:52.360 "dhchap_key": "key0", 00:14:52.360 "dhchap_ctrlr_key": "key1", 00:14:52.360 "method": "bdev_nvme_attach_controller", 00:14:52.360 "req_id": 1 00:14:52.360 } 00:14:52.360 Got JSON-RPC error response 00:14:52.360 response: 00:14:52.360 { 00:14:52.360 "code": -5, 00:14:52.360 "message": "Input/output error" 00:14:52.360 } 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:52.360 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:52.619 00:14:52.619 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:52.619 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:52.619 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.877 14:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.877 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.877 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68636 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68636 ']' 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68636 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68636 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68636' 00:14:53.135 killing process with pid 68636 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68636 00:14:53.135 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68636 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.705 rmmod nvme_tcp 00:14:53.705 rmmod nvme_fabrics 00:14:53.705 rmmod nvme_keyring 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71493 ']' 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71493 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71493 ']' 00:14:53.705 
14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71493 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71493 00:14:53.705 killing process with pid 71493 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71493' 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71493 00:14:53.705 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71493 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.EcS /tmp/spdk.key-sha256.Ohd /tmp/spdk.key-sha384.DIJ /tmp/spdk.key-sha512.Gov /tmp/spdk.key-sha512.CZ9 /tmp/spdk.key-sha384.4tM /tmp/spdk.key-sha256.fRJ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:53.965 00:14:53.965 real 2m33.832s 00:14:53.965 user 6m5.078s 00:14:53.965 sys 0m23.396s 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.965 ************************************ 00:14:53.965 END TEST nvmf_auth_target 00:14:53.965 ************************************ 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:53.965 14:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:53.966 14:03:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.966 14:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.966 ************************************ 00:14:53.966 START TEST nvmf_bdevio_no_huge 00:14:53.966 ************************************ 00:14:53.966 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:54.226 * Looking for test storage... 00:14:54.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:54.226 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.226 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:54.226 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.226 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.227 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:54.228 Cannot find device "nvmf_tgt_br" 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.228 Cannot find device "nvmf_tgt_br2" 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:54.228 Cannot find device "nvmf_tgt_br" 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:54.228 Cannot find device "nvmf_tgt_br2" 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.228 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:54.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:14:54.489 00:14:54.489 --- 10.0.0.2 ping statistics --- 00:14:54.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.489 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:54.489 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:54.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:54.489 00:14:54.489 --- 10.0.0.3 ping statistics --- 00:14:54.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.490 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:54.490 00:14:54.490 --- 10.0.0.1 ping statistics --- 00:14:54.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.490 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71795 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71795 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71795 ']' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.490 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.490 [2024-07-25 14:03:03.786780] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:14:54.490 [2024-07-25 14:03:03.786955] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:54.749 [2024-07-25 14:03:03.926086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.008 [2024-07-25 14:03:04.055696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.008 [2024-07-25 14:03:04.055760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.008 [2024-07-25 14:03:04.055768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.008 [2024-07-25 14:03:04.055774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.008 [2024-07-25 14:03:04.055779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.008 [2024-07-25 14:03:04.055862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:55.008 [2024-07-25 14:03:04.056547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.008 [2024-07-25 14:03:04.056422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:55.008 [2024-07-25 14:03:04.056543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:55.008 [2024-07-25 14:03:04.060629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.577 [2024-07-25 14:03:04.760491] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.577 Malloc0 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.577 14:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.577 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 [2024-07-25 14:03:04.798860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.578 { 00:14:55.578 "params": { 00:14:55.578 "name": "Nvme$subsystem", 00:14:55.578 "trtype": "$TEST_TRANSPORT", 00:14:55.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.578 "adrfam": "ipv4", 00:14:55.578 "trsvcid": "$NVMF_PORT", 00:14:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.578 "hdgst": ${hdgst:-false}, 00:14:55.578 "ddgst": ${ddgst:-false} 00:14:55.578 }, 00:14:55.578 "method": "bdev_nvme_attach_controller" 00:14:55.578 } 00:14:55.578 EOF 00:14:55.578 )") 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:55.578 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.578 "params": { 00:14:55.578 "name": "Nvme1", 00:14:55.578 "trtype": "tcp", 00:14:55.578 "traddr": "10.0.0.2", 00:14:55.578 "adrfam": "ipv4", 00:14:55.578 "trsvcid": "4420", 00:14:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.578 "hdgst": false, 00:14:55.578 "ddgst": false 00:14:55.578 }, 00:14:55.578 "method": "bdev_nvme_attach_controller" 00:14:55.578 }' 00:14:55.578 [2024-07-25 14:03:04.851662] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:14:55.578 [2024-07-25 14:03:04.851896] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71834 ] 00:14:55.848 [2024-07-25 14:03:04.997815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.848 [2024-07-25 14:03:05.115952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.848 [2024-07-25 14:03:05.116006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.848 [2024-07-25 14:03:05.116010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.848 [2024-07-25 14:03:05.128898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.110 I/O targets: 00:14:56.110 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:56.110 00:14:56.110 00:14:56.110 CUnit - A unit testing framework for C - Version 2.1-3 00:14:56.110 http://cunit.sourceforge.net/ 00:14:56.110 00:14:56.110 00:14:56.110 Suite: bdevio tests on: Nvme1n1 00:14:56.110 Test: blockdev write read block ...passed 00:14:56.110 Test: blockdev write zeroes read block ...passed 00:14:56.110 Test: blockdev write zeroes read no split ...passed 00:14:56.110 Test: blockdev write zeroes read split ...passed 00:14:56.110 Test: blockdev write zeroes read split partial ...passed 00:14:56.110 Test: blockdev reset ...[2024-07-25 14:03:05.314475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:56.110 [2024-07-25 14:03:05.314598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25870 (9): Bad file descriptor 00:14:56.110 passed 00:14:56.110 Test: blockdev write read 8 blocks ...[2024-07-25 14:03:05.330441] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
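The JSON fragment assembled by gen_nvmf_target_json just above is what points bdevio at the target over NVMe/TCP. As a rough illustration only, the same attach parameters could be written to a file by hand and passed with --json; the outer "subsystems"/"bdev" wrapper shown here is the usual SPDK JSON-config shape and is assumed rather than visible in this trace:

cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# bdevio binary path and flags as used in the trace above
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024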
00:14:56.110 passed 00:14:56.110 Test: blockdev write read size > 128k ...passed 00:14:56.110 Test: blockdev write read invalid size ...passed 00:14:56.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:56.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:56.110 Test: blockdev write read max offset ...passed 00:14:56.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:56.110 Test: blockdev writev readv 8 blocks ...passed 00:14:56.110 Test: blockdev writev readv 30 x 1block ...passed 00:14:56.110 Test: blockdev writev readv block ...passed 00:14:56.110 Test: blockdev writev readv size > 128k ...passed 00:14:56.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:56.110 Test: blockdev comparev and writev ...[2024-07-25 14:03:05.337247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.337341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.337632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.337665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.337927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.337959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.338215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.338231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.338247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.110 [2024-07-25 14:03:05.338258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:56.110 passed 00:14:56.110 Test: blockdev nvme passthru rw ...passed 00:14:56.110 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:03:05.338999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.110 [2024-07-25 14:03:05.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.339134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.110 [2024-07-25 14:03:05.339148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.339247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.110 [2024-07-25 14:03:05.339260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:56.110 [2024-07-25 14:03:05.339363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.110 passed 00:14:56.110 Test: blockdev nvme admin passthru ...[2024-07-25 14:03:05.339377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:56.110 passed 00:14:56.110 Test: blockdev copy ...passed 00:14:56.110 00:14:56.110 Run Summary: Type Total Ran Passed Failed Inactive 00:14:56.110 suites 1 1 n/a 0 0 00:14:56.110 tests 23 23 23 0 0 00:14:56.110 asserts 152 152 152 0 n/a 00:14:56.110 00:14:56.110 Elapsed time = 0.167 seconds 00:14:56.373 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.373 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.373 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.646 rmmod nvme_tcp 00:14:56.646 rmmod nvme_fabrics 00:14:56.646 rmmod nvme_keyring 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:56.646 14:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71795 ']' 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71795 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71795 ']' 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71795 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71795 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:56.646 killing process with pid 71795 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71795' 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71795 00:14:56.646 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71795 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.908 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:57.180 00:14:57.180 real 0m3.065s 00:14:57.180 user 0m9.584s 00:14:57.180 sys 0m1.227s 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 ************************************ 00:14:57.180 END TEST nvmf_bdevio_no_huge 00:14:57.180 ************************************ 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 ************************************ 00:14:57.180 START TEST nvmf_tls 00:14:57.180 ************************************ 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:57.180 * Looking for test storage... 00:14:57.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.180 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:57.439 Cannot find device 
"nvmf_tgt_br" 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.439 Cannot find device "nvmf_tgt_br2" 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:57.439 Cannot find device "nvmf_tgt_br" 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:57.439 Cannot find device "nvmf_tgt_br2" 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.439 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:57.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:57.698 00:14:57.698 --- 10.0.0.2 ping statistics --- 00:14:57.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.698 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:57.698 00:14:57.698 --- 10.0.0.3 ping statistics --- 00:14:57.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.698 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:57.698 00:14:57.698 --- 10.0.0.1 ping statistics --- 00:14:57.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.698 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72018 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72018 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72018 ']' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.698 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.698 [2024-07-25 14:03:06.871075] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:14:57.698 [2024-07-25 14:03:06.871150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.959 [2024-07-25 14:03:07.004861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.959 [2024-07-25 14:03:07.119349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.959 [2024-07-25 14:03:07.119405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.959 [2024-07-25 14:03:07.119413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.959 [2024-07-25 14:03:07.119419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.959 [2024-07-25 14:03:07.119424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.959 [2024-07-25 14:03:07.119450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:58.532 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:58.791 true 00:14:58.791 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:58.791 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:59.049 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:59.049 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:59.049 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:59.309 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:59.309 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:59.568 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:59.568 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:59.568 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:59.827 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:59.827 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:00.085 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:00.085 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:00.085 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:00.085 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.343 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:00.343 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:00.343 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:00.613 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.613 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:15:00.873 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:00.873 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:00.873 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:00.873 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.873 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:01.134 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:01.404 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rNpE9rjilP 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.LsuX1K0dqx 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rNpE9rjilP 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.LsuX1K0dqx 00:15:01.405 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:01.663 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:01.950 [2024-07-25 14:03:11.008568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.950 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rNpE9rjilP 00:15:01.950 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rNpE9rjilP 00:15:01.950 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.209 [2024-07-25 14:03:11.265799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.209 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:02.209 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:02.467 [2024-07-25 14:03:11.681084] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.467 [2024-07-25 14:03:11.681325] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.467 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:02.725 malloc0 00:15:02.725 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:02.984 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rNpE9rjilP 00:15:03.243 [2024-07-25 14:03:12.368781] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:03.243 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rNpE9rjilP 00:15:13.251 Initializing NVMe Controllers 00:15:13.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.251 Initialization complete. Launching workers. 00:15:13.251 ======================================================== 00:15:13.251 Latency(us) 00:15:13.251 Device Information : IOPS MiB/s Average min max 00:15:13.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13009.87 50.82 4919.85 1005.78 6577.17 00:15:13.251 ======================================================== 00:15:13.251 Total : 13009.87 50.82 4919.85 1005.78 6577.17 00:15:13.251 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rNpE9rjilP 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rNpE9rjilP' 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72249 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72249 /var/tmp/bdevperf.sock 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72249 ']' 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
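Collected from the tls.sh trace above, the target-side bring-up is a short RPC sequence against an nvmf_tgt started with --wait-for-rpc: the ssl socket implementation is configured before framework initialization, and the subsystem gets a TLS listener (-k) plus a host entry tied to a PSK file. The key files hold keys in the TLS PSK interchange format generated above; the NVMeTLSkey-1:01: prefix is visible in the trace, while the description of the payload as the base64-encoded secret with a CRC32 trailer comes from the interchange-format definition, not from the log. A condensed sketch, arguments copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13     # must happen before framework_start_init
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rNpE9rjilP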
00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.511 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 [2024-07-25 14:03:22.598648] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:13.511 [2024-07-25 14:03:22.598710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72249 ] 00:15:13.511 [2024-07-25 14:03:22.736535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.770 [2024-07-25 14:03:22.822978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.770 [2024-07-25 14:03:22.864215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.344 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.344 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:14.344 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rNpE9rjilP 00:15:14.344 [2024-07-25 14:03:23.642062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.344 [2024-07-25 14:03:23.642173] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:14.604 TLSTESTn1 00:15:14.604 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:14.604 Running I/O for 10 seconds... 
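Two TLS data paths are exercised with the registered key: first spdk_nvme_perf connects directly with --psk-path, then bdevperf (whose 10-second verify run has just started above) attaches a controller over RPC with --psk and is driven by bdevperf.py. A condensed sketch using the commands from the trace; the sleep is an illustrative stand-in for the harness waiting on the bdevperf RPC socket:

spdk=/home/vagrant/spdk_repo/spdk
# 1) direct perf run over TLS, issued from inside the target namespace
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.rNpE9rjilP
# 2) bdevperf: start the app idle (-z), attach a TLS controller over its RPC socket, then run the workload
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
sleep 1
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.rNpE9rjilP
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests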
00:15:24.585 00:15:24.585 Latency(us) 00:15:24.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.586 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:24.586 Verification LBA range: start 0x0 length 0x2000 00:15:24.586 TLSTESTn1 : 10.01 6041.12 23.60 0.00 0.00 21152.15 4865.12 17972.32 00:15:24.586 =================================================================================================================== 00:15:24.586 Total : 6041.12 23.60 0.00 0.00 21152.15 4865.12 17972.32 00:15:24.586 0 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72249 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72249 ']' 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72249 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72249 00:15:24.586 killing process with pid 72249 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72249' 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72249 00:15:24.586 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.586 00:15:24.586 Latency(us) 00:15:24.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.586 =================================================================================================================== 00:15:24.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.586 [2024-07-25 14:03:33.885288] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:24.586 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72249 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsuX1K0dqx 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsuX1K0dqx 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.845 14:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsuX1K0dqx 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LsuX1K0dqx' 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72377 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72377 /var/tmp/bdevperf.sock 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72377 ']' 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.845 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.845 [2024-07-25 14:03:34.138613] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:15:24.845 [2024-07-25 14:03:34.138691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72377 ] 00:15:25.104 [2024-07-25 14:03:34.277564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.104 [2024-07-25 14:03:34.372411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.363 [2024-07-25 14:03:34.413321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:25.931 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.931 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:25.931 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsuX1K0dqx 00:15:25.931 [2024-07-25 14:03:35.210802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.931 [2024-07-25 14:03:35.210924] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:25.931 [2024-07-25 14:03:35.215651] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:25.931 [2024-07-25 14:03:35.216285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0c1f0 (107): Transport endpoint is not connected 00:15:25.931 [2024-07-25 14:03:35.217271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0c1f0 (9): Bad file descriptor 00:15:25.931 [2024-07-25 14:03:35.218267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:25.931 [2024-07-25 14:03:35.218291] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:25.931 [2024-07-25 14:03:35.218308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
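The attach above is expected to fail: judging from the trace, the key in /tmp/tmp.LsuX1K0dqx no longer matches a PSK the target accepts for nqn.2016-06.io.spdk:host1, so the TLS handshake is rejected, the socket is closed (errno 107), and the JSON-RPC response below comes back as -5 (Input/output error), which is exactly what the NOT wrapper around run_bdevperf requires. A minimal sketch of issuing the same call by hand, using only arguments already visible in this trace (the RPC socket path is whatever bdevperf was started with via -r):

# hedged sketch: the same RPC the test issues; at this point it is expected to return -5
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.LsuX1K0dqx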
00:15:25.931 request: 00:15:25.931 { 00:15:25.931 "name": "TLSTEST", 00:15:25.931 "trtype": "tcp", 00:15:25.931 "traddr": "10.0.0.2", 00:15:25.931 "adrfam": "ipv4", 00:15:25.931 "trsvcid": "4420", 00:15:25.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.931 "prchk_reftag": false, 00:15:25.931 "prchk_guard": false, 00:15:25.931 "hdgst": false, 00:15:25.931 "ddgst": false, 00:15:25.931 "psk": "/tmp/tmp.LsuX1K0dqx", 00:15:25.931 "method": "bdev_nvme_attach_controller", 00:15:25.931 "req_id": 1 00:15:25.931 } 00:15:25.931 Got JSON-RPC error response 00:15:25.931 response: 00:15:25.931 { 00:15:25.931 "code": -5, 00:15:25.931 "message": "Input/output error" 00:15:25.931 } 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72377 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72377 ']' 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72377 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72377 00:15:26.191 killing process with pid 72377 00:15:26.191 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.191 00:15:26.191 Latency(us) 00:15:26.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.191 =================================================================================================================== 00:15:26.191 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72377' 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72377 00:15:26.191 [2024-07-25 14:03:35.259812] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72377 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rNpE9rjilP 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rNpE9rjilP 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rNpE9rjilP 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:26.191 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rNpE9rjilP' 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72399 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72399 /var/tmp/bdevperf.sock 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72399 ']' 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.192 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.496 [2024-07-25 14:03:35.499604] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:15:26.496 [2024-07-25 14:03:35.499686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72399 ] 00:15:26.496 [2024-07-25 14:03:35.640175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.496 [2024-07-25 14:03:35.745894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.496 [2024-07-25 14:03:35.788326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rNpE9rjilP 00:15:27.450 [2024-07-25 14:03:36.539482] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.450 [2024-07-25 14:03:36.539584] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:27.450 [2024-07-25 14:03:36.543944] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:27.450 [2024-07-25 14:03:36.543982] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:27.450 [2024-07-25 14:03:36.544026] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:27.450 [2024-07-25 14:03:36.544724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb241f0 (107): Transport endpoint is not connected 00:15:27.450 [2024-07-25 14:03:36.545710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb241f0 (9): Bad file descriptor 00:15:27.450 [2024-07-25 14:03:36.546706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:27.450 [2024-07-25 14:03:36.546726] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:27.450 [2024-07-25 14:03:36.546736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
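This case fails one step earlier than the previous one: tcp_sock_get_key cannot resolve the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" because host2 was never registered against cnode1 with a key, so the server side of the handshake has nothing to look up. A hedged sketch of the registration this negative test deliberately omits (the same nvmf_subsystem_add_host --psk form used for host1 later in this log):

# hedged sketch: registering host2 with a PSK would make the identity lookup succeed;
# the test intentionally leaves this out so the attach below fails with -5
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rNpE9rjilP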
00:15:27.450 request: 00:15:27.450 { 00:15:27.450 "name": "TLSTEST", 00:15:27.450 "trtype": "tcp", 00:15:27.450 "traddr": "10.0.0.2", 00:15:27.450 "adrfam": "ipv4", 00:15:27.450 "trsvcid": "4420", 00:15:27.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:27.450 "prchk_reftag": false, 00:15:27.450 "prchk_guard": false, 00:15:27.450 "hdgst": false, 00:15:27.450 "ddgst": false, 00:15:27.450 "psk": "/tmp/tmp.rNpE9rjilP", 00:15:27.450 "method": "bdev_nvme_attach_controller", 00:15:27.450 "req_id": 1 00:15:27.450 } 00:15:27.450 Got JSON-RPC error response 00:15:27.450 response: 00:15:27.450 { 00:15:27.450 "code": -5, 00:15:27.450 "message": "Input/output error" 00:15:27.450 } 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72399 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72399 ']' 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72399 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72399 00:15:27.450 killing process with pid 72399 00:15:27.450 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.450 00:15:27.450 Latency(us) 00:15:27.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.450 =================================================================================================================== 00:15:27.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72399' 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72399 00:15:27.450 [2024-07-25 14:03:36.598486] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:27.450 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72399 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rNpE9rjilP 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rNpE9rjilP 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rNpE9rjilP 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rNpE9rjilP' 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72427 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72427 /var/tmp/bdevperf.sock 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72427 ']' 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.710 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.710 [2024-07-25 14:03:36.845927] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:15:27.710 [2024-07-25 14:03:36.845993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72427 ] 00:15:27.710 [2024-07-25 14:03:36.971261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.969 [2024-07-25 14:03:37.075063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.969 [2024-07-25 14:03:37.116525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:28.537 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.537 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:28.537 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rNpE9rjilP 00:15:28.797 [2024-07-25 14:03:37.890646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:28.797 [2024-07-25 14:03:37.890751] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:28.797 [2024-07-25 14:03:37.900591] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:28.797 [2024-07-25 14:03:37.900646] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:28.797 [2024-07-25 14:03:37.900708] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:28.797 [2024-07-25 14:03:37.900841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e781f0 (107): Transport endpoint is not connected 00:15:28.797 [2024-07-25 14:03:37.901828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e781f0 (9): Bad file descriptor 00:15:28.797 [2024-07-25 14:03:37.902830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:28.797 [2024-07-25 14:03:37.902847] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:28.797 [2024-07-25 14:03:37.902857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
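Here the mismatch is mirrored: host1 is a known host, but nothing is registered for it under nqn.2016-06.io.spdk:cnode2, so the lookup for "NVMe0R01 ... host1 ... cnode2" fails in the same way and the -5 response below follows as before. As the error lines in these two cases show, the identity string the target searches for is just the fixed NVMe0R01 tag followed by the host NQN and the subsystem NQN:

# hedged sketch of the PSK identity format, taken from the error messages above
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
echo "NVMe0R01 ${hostnqn} ${subnqn}"   # the string tcp_sock_get_key tries to resolve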
00:15:28.797 request: 00:15:28.797 { 00:15:28.797 "name": "TLSTEST", 00:15:28.797 "trtype": "tcp", 00:15:28.797 "traddr": "10.0.0.2", 00:15:28.797 "adrfam": "ipv4", 00:15:28.797 "trsvcid": "4420", 00:15:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:28.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.797 "prchk_reftag": false, 00:15:28.797 "prchk_guard": false, 00:15:28.797 "hdgst": false, 00:15:28.797 "ddgst": false, 00:15:28.797 "psk": "/tmp/tmp.rNpE9rjilP", 00:15:28.797 "method": "bdev_nvme_attach_controller", 00:15:28.797 "req_id": 1 00:15:28.797 } 00:15:28.797 Got JSON-RPC error response 00:15:28.797 response: 00:15:28.797 { 00:15:28.797 "code": -5, 00:15:28.797 "message": "Input/output error" 00:15:28.797 } 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72427 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72427 ']' 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72427 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72427 00:15:28.797 killing process with pid 72427 00:15:28.797 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.797 00:15:28.797 Latency(us) 00:15:28.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.797 =================================================================================================================== 00:15:28.797 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72427' 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72427 00:15:28.797 [2024-07-25 14:03:37.949216] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:28.797 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72427 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72454 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72454 /var/tmp/bdevperf.sock 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72454 ']' 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.055 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.055 [2024-07-25 14:03:38.179336] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:15:29.055 [2024-07-25 14:03:38.179417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72454 ] 00:15:29.055 [2024-07-25 14:03:38.319369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.313 [2024-07-25 14:03:38.414289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.313 [2024-07-25 14:03:38.454343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.880 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.881 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:29.881 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:30.140 [2024-07-25 14:03:39.288130] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:30.140 [2024-07-25 14:03:39.289998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb68c00 (9): Bad file descriptor 00:15:30.140 [2024-07-25 14:03:39.290991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:30.140 [2024-07-25 14:03:39.291014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:30.140 [2024-07-25 14:03:39.291023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
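In this case no PSK is supplied at all, so the initiator never generates TLS credentials and attempts a plain NVMe/TCP connect; the listener in this test group was created with TLS enabled (the -k flag on nvmf_subsystem_add_listener), so the target presumably drops the connection and the attach fails with the same -5 response shown below. The only difference from the earlier attempts is the missing --psk argument, for example:

# hedged sketch: same attach with no --psk; expected to fail against a TLS-only listener
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1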
00:15:30.140 request: 00:15:30.140 { 00:15:30.140 "name": "TLSTEST", 00:15:30.140 "trtype": "tcp", 00:15:30.140 "traddr": "10.0.0.2", 00:15:30.140 "adrfam": "ipv4", 00:15:30.140 "trsvcid": "4420", 00:15:30.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.140 "prchk_reftag": false, 00:15:30.140 "prchk_guard": false, 00:15:30.140 "hdgst": false, 00:15:30.140 "ddgst": false, 00:15:30.140 "method": "bdev_nvme_attach_controller", 00:15:30.140 "req_id": 1 00:15:30.140 } 00:15:30.140 Got JSON-RPC error response 00:15:30.140 response: 00:15:30.140 { 00:15:30.140 "code": -5, 00:15:30.140 "message": "Input/output error" 00:15:30.140 } 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72454 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72454 ']' 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72454 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72454 00:15:30.140 killing process with pid 72454 00:15:30.140 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.140 00:15:30.140 Latency(us) 00:15:30.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.140 =================================================================================================================== 00:15:30.140 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72454' 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72454 00:15:30.140 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72454 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 72018 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72018 ']' 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72018 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 72018 00:15:30.399 killing process with pid 72018 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72018' 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72018 00:15:30.399 [2024-07-25 14:03:39.567322] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:30.399 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72018 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.xCUlETHsE5 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.xCUlETHsE5 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72492 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72492 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72492 ']' 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:15:30.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.660 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.660 [2024-07-25 14:03:39.885698] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:30.660 [2024-07-25 14:03:39.885768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.923 [2024-07-25 14:03:40.024171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.923 [2024-07-25 14:03:40.125119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.923 [2024-07-25 14:03:40.125160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.923 [2024-07-25 14:03:40.125182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.923 [2024-07-25 14:03:40.125187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.923 [2024-07-25 14:03:40.125191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.923 [2024-07-25 14:03:40.125211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.923 [2024-07-25 14:03:40.166639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xCUlETHsE5 00:15:31.493 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.755 [2024-07-25 14:03:40.993394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.755 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:32.014 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:15:32.273 [2024-07-25 14:03:41.344770] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.273 [2024-07-25 14:03:41.344931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.273 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:32.273 malloc0 00:15:32.273 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:32.531 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:32.789 [2024-07-25 14:03:41.928341] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:32.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xCUlETHsE5 00:15:32.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xCUlETHsE5' 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72541 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72541 /var/tmp/bdevperf.sock 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72541 ']' 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.790 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.790 [2024-07-25 14:03:41.995650] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
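For reference, the NVMeTLSkey-1:02:... string written to /tmp/tmp.xCUlETHsE5 above is the TLS PSK interchange format: a version tag, a two-digit hash identifier (02 here), and a base64 blob that is the configured key bytes with a CRC32 appended, all delimited by colons. A rough sketch of what the format_key helper's embedded python produces is below; the key is used as the literal ASCII string (the base64 prefix in the trace decodes back to it), while the little-endian byte order of the appended CRC is an assumption rather than something visible in this log:

# hedged sketch of building the interchange string; CRC byte order is assumed
key=00112233445566778899aabbccddeeff0011223344556677
python - <<EOF
import base64, zlib
key = b"${key}"                                   # the literal ASCII key string
blob = key + zlib.crc32(key).to_bytes(4, "little")   # append CRC32 (assumed little-endian)
print("NVMeTLSkey-1:02:" + base64.b64encode(blob).decode() + ":")
EOF

The key file is then written with mode 0600; the chmod 0666 later in this run checks that looser permissions are rejected.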
00:15:32.790 [2024-07-25 14:03:41.995729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:15:33.047 [2024-07-25 14:03:42.133520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.047 [2024-07-25 14:03:42.236711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.047 [2024-07-25 14:03:42.278510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:33.614 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.614 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:33.614 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:33.873 [2024-07-25 14:03:43.029935] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.873 [2024-07-25 14:03:43.030044] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:33.873 TLSTESTn1 00:15:33.873 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:34.133 Running I/O for 10 seconds... 00:15:44.135 00:15:44.135 Latency(us) 00:15:44.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:44.135 Verification LBA range: start 0x0 length 0x2000 00:15:44.135 TLSTESTn1 : 10.01 5785.91 22.60 0.00 0.00 22085.57 4264.13 28503.87 00:15:44.135 =================================================================================================================== 00:15:44.135 Total : 5785.91 22.60 0.00 0.00 22085.57 4264.13 28503.87 00:15:44.135 0 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72541 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72541 ']' 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72541 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72541 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:44.135 killing process with pid 72541 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
72541' 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72541 00:15:44.135 Received shutdown signal, test time was about 10.000000 seconds 00:15:44.135 00:15:44.135 Latency(us) 00:15:44.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.135 =================================================================================================================== 00:15:44.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.135 [2024-07-25 14:03:53.283016] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:44.135 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72541 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.xCUlETHsE5 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xCUlETHsE5 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xCUlETHsE5 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xCUlETHsE5 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xCUlETHsE5' 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72670 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72670 /var/tmp/bdevperf.sock 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72670 ']' 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
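The chmod 0666 above sets up the last initiator-side negative case: bdev_nvme refuses to load a PSK file that is readable by group or others, so the attach attempt that follows fails with "Incorrect permissions for PSK file" and a -1 (Operation not permitted) JSON-RPC response instead of getting as far as a TLS handshake. A quick way to see (and, outside of this negative test, clear) the condition:

# hedged sketch: confirming why the next attach is expected to fail
stat -c '%a %n' /tmp/tmp.xCUlETHsE5    # prints "666 /tmp/tmp.xCUlETHsE5" at this point
# chmod 0600 /tmp/tmp.xCUlETHsE5       # tightening it back would let the key be loaded again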
00:15:44.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.393 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.393 [2024-07-25 14:03:53.539777] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:44.393 [2024-07-25 14:03:53.539859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 00:15:44.393 [2024-07-25 14:03:53.677360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.652 [2024-07-25 14:03:53.786859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.652 [2024-07-25 14:03:53.838281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:45.218 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.218 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:45.218 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:45.476 [2024-07-25 14:03:54.672360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.476 [2024-07-25 14:03:54.672434] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:45.476 [2024-07-25 14:03:54.672441] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.xCUlETHsE5 00:15:45.476 request: 00:15:45.476 { 00:15:45.476 "name": "TLSTEST", 00:15:45.476 "trtype": "tcp", 00:15:45.476 "traddr": "10.0.0.2", 00:15:45.476 "adrfam": "ipv4", 00:15:45.476 "trsvcid": "4420", 00:15:45.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:45.476 "prchk_reftag": false, 00:15:45.476 "prchk_guard": false, 00:15:45.476 "hdgst": false, 00:15:45.476 "ddgst": false, 00:15:45.476 "psk": "/tmp/tmp.xCUlETHsE5", 00:15:45.476 "method": "bdev_nvme_attach_controller", 00:15:45.476 "req_id": 1 00:15:45.476 } 00:15:45.476 Got JSON-RPC error response 00:15:45.476 response: 00:15:45.476 { 00:15:45.476 "code": -1, 00:15:45.476 "message": "Operation not permitted" 00:15:45.476 } 00:15:45.476 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72670 00:15:45.476 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72670 ']' 00:15:45.476 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72670 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72670 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72670' 00:15:45.477 killing process with pid 72670 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72670 00:15:45.477 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.477 00:15:45.477 Latency(us) 00:15:45.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.477 =================================================================================================================== 00:15:45.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.477 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72670 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72492 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72492 ']' 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72492 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72492 00:15:45.736 killing process with pid 72492 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72492' 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72492 00:15:45.736 [2024-07-25 14:03:54.952909] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:45.736 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72492 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.995 14:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72709 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72709 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72709 ']' 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.995 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.995 [2024-07-25 14:03:55.221044] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:45.995 [2024-07-25 14:03:55.221118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.254 [2024-07-25 14:03:55.358334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.254 [2024-07-25 14:03:55.457057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.254 [2024-07-25 14:03:55.457110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.254 [2024-07-25 14:03:55.457117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.254 [2024-07-25 14:03:55.457122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.254 [2024-07-25 14:03:55.457127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
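The waits above ("Waiting for process to start up and listen on UNIX domain socket ...") all follow the same pattern: launch the SPDK app, then poll its JSON-RPC socket until it answers. A minimal stand-in for that step is sketched below; it assumes the repo path and default socket shown in the trace, skips the "ip netns exec nvmf_tgt_ns_spdk" wrapper, and uses an illustrative retry budget rather than whatever the autotest helper actually uses.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk.sock
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    tgt_pid=$!
    # spdk_get_version only succeeds once the app is listening on $SOCK
    for _ in $(seq 1 100); do
        "$SPDK"/scripts/rpc.py -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done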
00:15:46.255 [2024-07-25 14:03:55.457155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.255 [2024-07-25 14:03:55.498663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:46.823 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.823 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:46.823 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.823 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.823 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xCUlETHsE5 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:47.082 [2024-07-25 14:03:56.333619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.082 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:47.341 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:47.600 [2024-07-25 14:03:56.700986] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:47.600 [2024-07-25 14:03:56.701176] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.600 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:47.861 malloc0 00:15:47.861 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:48.122 [2024-07-25 14:03:57.372679] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:48.122 [2024-07-25 14:03:57.372734] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:48.122 [2024-07-25 14:03:57.372766] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:48.122 request: 00:15:48.122 { 00:15:48.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.122 "host": "nqn.2016-06.io.spdk:host1", 00:15:48.122 "psk": "/tmp/tmp.xCUlETHsE5", 00:15:48.122 "method": "nvmf_subsystem_add_host", 00:15:48.122 "req_id": 1 00:15:48.122 } 00:15:48.122 Got JSON-RPC error response 00:15:48.122 response: 00:15:48.122 { 00:15:48.122 "code": -32603, 00:15:48.122 "message": "Internal error" 00:15:48.122 } 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72709 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72709 ']' 00:15:48.122 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72709 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72709 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:48.123 killing process with pid 72709 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72709' 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72709 00:15:48.123 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72709 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.xCUlETHsE5 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72767 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72767 00:15:48.382 14:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72767 ']' 00:15:48.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.382 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.641 [2024-07-25 14:03:57.694359] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:48.641 [2024-07-25 14:03:57.694432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.641 [2024-07-25 14:03:57.827791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.641 [2024-07-25 14:03:57.930113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.641 [2024-07-25 14:03:57.930163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.641 [2024-07-25 14:03:57.930169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.641 [2024-07-25 14:03:57.930174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.641 [2024-07-25 14:03:57.930178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
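Both failures above ("Incorrect permissions for PSK file" from bdev_nvme_attach_controller and the -32603 from nvmf_subsystem_add_host) are about the key file's mode, not its contents: the PSK loader rejects the file until its permissions are tightened, and the chmod 0600 at target/tls.sh:181 is what lets the retries later in the trace succeed. A sketch of the order that works, assuming /tmp/tmp.xCUlETHsE5 already holds a valid pre-shared key and SPDK points at the repo used in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    KEY=/tmp/tmp.xCUlETHsE5
    chmod 0600 "$KEY"          # owner read/write only, or the PSK load is rejected
    # target side: allow host1 to connect to cnode1 with this key
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # initiator side (bdevperf's RPC socket): attach over TLS with the same key
    "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"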
00:15:48.641 [2024-07-25 14:03:57.930199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.899 [2024-07-25 14:03:57.972519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xCUlETHsE5 00:15:49.468 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:49.727 [2024-07-25 14:03:58.792627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.727 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:49.985 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:49.985 [2024-07-25 14:03:59.243845] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:49.985 [2024-07-25 14:03:59.244031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.985 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:50.244 malloc0 00:15:50.244 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:50.504 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:50.764 [2024-07-25 14:03:59.871659] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72816 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:50.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
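Stripped of the xtrace noise, the setup_nvmf_tgt sequence just traced (target/tls.sh lines 49-58) comes down to six RPCs against the target's default socket. The condensed form below reuses the repo path, addresses and key file from the log; it is a sketch of the same flow, not a replacement for the test helper.

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"                 # default socket /var/tmp/spdk.sock
    KEY=/tmp/tmp.xCUlETHsE5
    $RPC nvmf_create_transport -t tcp -o                              # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k                                # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB backing bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"                       # PSK path, scheduled for removal in v24.09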
00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72816 /var/tmp/bdevperf.sock 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72816 ']' 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.764 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.764 [2024-07-25 14:03:59.943582] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:50.764 [2024-07-25 14:03:59.943652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72816 ] 00:15:51.024 [2024-07-25 14:04:00.081060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.024 [2024-07-25 14:04:00.187077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.024 [2024-07-25 14:04:00.230102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:51.642 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.642 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:51.642 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:15:51.903 [2024-07-25 14:04:01.066160] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.903 [2024-07-25 14:04:01.066431] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:51.903 TLSTESTn1 00:15:51.903 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:52.471 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:52.471 "subsystems": [ 00:15:52.471 { 00:15:52.471 "subsystem": "keyring", 00:15:52.471 "config": [] 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "subsystem": "iobuf", 00:15:52.471 "config": [ 00:15:52.471 { 00:15:52.471 "method": "iobuf_set_options", 00:15:52.471 "params": { 00:15:52.471 "small_pool_count": 8192, 00:15:52.471 "large_pool_count": 1024, 00:15:52.471 "small_bufsize": 8192, 00:15:52.471 "large_bufsize": 135168 00:15:52.471 } 00:15:52.471 } 00:15:52.471 ] 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "subsystem": "sock", 00:15:52.471 "config": [ 00:15:52.471 { 00:15:52.471 "method": "sock_set_default_impl", 00:15:52.471 "params": { 00:15:52.471 "impl_name": "uring" 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": 
"sock_impl_set_options", 00:15:52.471 "params": { 00:15:52.471 "impl_name": "ssl", 00:15:52.471 "recv_buf_size": 4096, 00:15:52.471 "send_buf_size": 4096, 00:15:52.471 "enable_recv_pipe": true, 00:15:52.471 "enable_quickack": false, 00:15:52.471 "enable_placement_id": 0, 00:15:52.471 "enable_zerocopy_send_server": true, 00:15:52.471 "enable_zerocopy_send_client": false, 00:15:52.471 "zerocopy_threshold": 0, 00:15:52.471 "tls_version": 0, 00:15:52.471 "enable_ktls": false 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": "sock_impl_set_options", 00:15:52.471 "params": { 00:15:52.471 "impl_name": "posix", 00:15:52.471 "recv_buf_size": 2097152, 00:15:52.471 "send_buf_size": 2097152, 00:15:52.471 "enable_recv_pipe": true, 00:15:52.471 "enable_quickack": false, 00:15:52.471 "enable_placement_id": 0, 00:15:52.471 "enable_zerocopy_send_server": true, 00:15:52.471 "enable_zerocopy_send_client": false, 00:15:52.471 "zerocopy_threshold": 0, 00:15:52.471 "tls_version": 0, 00:15:52.471 "enable_ktls": false 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": "sock_impl_set_options", 00:15:52.471 "params": { 00:15:52.471 "impl_name": "uring", 00:15:52.471 "recv_buf_size": 2097152, 00:15:52.471 "send_buf_size": 2097152, 00:15:52.471 "enable_recv_pipe": true, 00:15:52.471 "enable_quickack": false, 00:15:52.471 "enable_placement_id": 0, 00:15:52.471 "enable_zerocopy_send_server": false, 00:15:52.471 "enable_zerocopy_send_client": false, 00:15:52.471 "zerocopy_threshold": 0, 00:15:52.471 "tls_version": 0, 00:15:52.471 "enable_ktls": false 00:15:52.471 } 00:15:52.471 } 00:15:52.471 ] 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "subsystem": "vmd", 00:15:52.471 "config": [] 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "subsystem": "accel", 00:15:52.471 "config": [ 00:15:52.471 { 00:15:52.471 "method": "accel_set_options", 00:15:52.471 "params": { 00:15:52.471 "small_cache_size": 128, 00:15:52.471 "large_cache_size": 16, 00:15:52.471 "task_count": 2048, 00:15:52.471 "sequence_count": 2048, 00:15:52.471 "buf_count": 2048 00:15:52.471 } 00:15:52.471 } 00:15:52.471 ] 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "subsystem": "bdev", 00:15:52.471 "config": [ 00:15:52.471 { 00:15:52.471 "method": "bdev_set_options", 00:15:52.471 "params": { 00:15:52.471 "bdev_io_pool_size": 65535, 00:15:52.471 "bdev_io_cache_size": 256, 00:15:52.471 "bdev_auto_examine": true, 00:15:52.471 "iobuf_small_cache_size": 128, 00:15:52.471 "iobuf_large_cache_size": 16 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": "bdev_raid_set_options", 00:15:52.471 "params": { 00:15:52.471 "process_window_size_kb": 1024, 00:15:52.471 "process_max_bandwidth_mb_sec": 0 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": "bdev_iscsi_set_options", 00:15:52.471 "params": { 00:15:52.471 "timeout_sec": 30 00:15:52.471 } 00:15:52.471 }, 00:15:52.471 { 00:15:52.471 "method": "bdev_nvme_set_options", 00:15:52.471 "params": { 00:15:52.471 "action_on_timeout": "none", 00:15:52.471 "timeout_us": 0, 00:15:52.471 "timeout_admin_us": 0, 00:15:52.471 "keep_alive_timeout_ms": 10000, 00:15:52.471 "arbitration_burst": 0, 00:15:52.471 "low_priority_weight": 0, 00:15:52.471 "medium_priority_weight": 0, 00:15:52.471 "high_priority_weight": 0, 00:15:52.471 "nvme_adminq_poll_period_us": 10000, 00:15:52.471 "nvme_ioq_poll_period_us": 0, 00:15:52.471 "io_queue_requests": 0, 00:15:52.471 "delay_cmd_submit": true, 00:15:52.471 "transport_retry_count": 4, 00:15:52.471 "bdev_retry_count": 3, 00:15:52.471 
"transport_ack_timeout": 0, 00:15:52.471 "ctrlr_loss_timeout_sec": 0, 00:15:52.471 "reconnect_delay_sec": 0, 00:15:52.471 "fast_io_fail_timeout_sec": 0, 00:15:52.471 "disable_auto_failback": false, 00:15:52.471 "generate_uuids": false, 00:15:52.471 "transport_tos": 0, 00:15:52.471 "nvme_error_stat": false, 00:15:52.471 "rdma_srq_size": 0, 00:15:52.472 "io_path_stat": false, 00:15:52.472 "allow_accel_sequence": false, 00:15:52.472 "rdma_max_cq_size": 0, 00:15:52.472 "rdma_cm_event_timeout_ms": 0, 00:15:52.472 "dhchap_digests": [ 00:15:52.472 "sha256", 00:15:52.472 "sha384", 00:15:52.472 "sha512" 00:15:52.472 ], 00:15:52.472 "dhchap_dhgroups": [ 00:15:52.472 "null", 00:15:52.472 "ffdhe2048", 00:15:52.472 "ffdhe3072", 00:15:52.472 "ffdhe4096", 00:15:52.472 "ffdhe6144", 00:15:52.472 "ffdhe8192" 00:15:52.472 ] 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "bdev_nvme_set_hotplug", 00:15:52.472 "params": { 00:15:52.472 "period_us": 100000, 00:15:52.472 "enable": false 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "bdev_malloc_create", 00:15:52.472 "params": { 00:15:52.472 "name": "malloc0", 00:15:52.472 "num_blocks": 8192, 00:15:52.472 "block_size": 4096, 00:15:52.472 "physical_block_size": 4096, 00:15:52.472 "uuid": "298fe509-c209-4532-878b-27976dc1c2f3", 00:15:52.472 "optimal_io_boundary": 0, 00:15:52.472 "md_size": 0, 00:15:52.472 "dif_type": 0, 00:15:52.472 "dif_is_head_of_md": false, 00:15:52.472 "dif_pi_format": 0 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "bdev_wait_for_examine" 00:15:52.472 } 00:15:52.472 ] 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "subsystem": "nbd", 00:15:52.472 "config": [] 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "subsystem": "scheduler", 00:15:52.472 "config": [ 00:15:52.472 { 00:15:52.472 "method": "framework_set_scheduler", 00:15:52.472 "params": { 00:15:52.472 "name": "static" 00:15:52.472 } 00:15:52.472 } 00:15:52.472 ] 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "subsystem": "nvmf", 00:15:52.472 "config": [ 00:15:52.472 { 00:15:52.472 "method": "nvmf_set_config", 00:15:52.472 "params": { 00:15:52.472 "discovery_filter": "match_any", 00:15:52.472 "admin_cmd_passthru": { 00:15:52.472 "identify_ctrlr": false 00:15:52.472 } 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_set_max_subsystems", 00:15:52.472 "params": { 00:15:52.472 "max_subsystems": 1024 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_set_crdt", 00:15:52.472 "params": { 00:15:52.472 "crdt1": 0, 00:15:52.472 "crdt2": 0, 00:15:52.472 "crdt3": 0 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_create_transport", 00:15:52.472 "params": { 00:15:52.472 "trtype": "TCP", 00:15:52.472 "max_queue_depth": 128, 00:15:52.472 "max_io_qpairs_per_ctrlr": 127, 00:15:52.472 "in_capsule_data_size": 4096, 00:15:52.472 "max_io_size": 131072, 00:15:52.472 "io_unit_size": 131072, 00:15:52.472 "max_aq_depth": 128, 00:15:52.472 "num_shared_buffers": 511, 00:15:52.472 "buf_cache_size": 4294967295, 00:15:52.472 "dif_insert_or_strip": false, 00:15:52.472 "zcopy": false, 00:15:52.472 "c2h_success": false, 00:15:52.472 "sock_priority": 0, 00:15:52.472 "abort_timeout_sec": 1, 00:15:52.472 "ack_timeout": 0, 00:15:52.472 "data_wr_pool_size": 0 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_create_subsystem", 00:15:52.472 "params": { 00:15:52.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.472 "allow_any_host": false, 00:15:52.472 "serial_number": 
"SPDK00000000000001", 00:15:52.472 "model_number": "SPDK bdev Controller", 00:15:52.472 "max_namespaces": 10, 00:15:52.472 "min_cntlid": 1, 00:15:52.472 "max_cntlid": 65519, 00:15:52.472 "ana_reporting": false 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_subsystem_add_host", 00:15:52.472 "params": { 00:15:52.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.472 "host": "nqn.2016-06.io.spdk:host1", 00:15:52.472 "psk": "/tmp/tmp.xCUlETHsE5" 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_subsystem_add_ns", 00:15:52.472 "params": { 00:15:52.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.472 "namespace": { 00:15:52.472 "nsid": 1, 00:15:52.472 "bdev_name": "malloc0", 00:15:52.472 "nguid": "298FE509C2094532878B27976DC1C2F3", 00:15:52.472 "uuid": "298fe509-c209-4532-878b-27976dc1c2f3", 00:15:52.472 "no_auto_visible": false 00:15:52.472 } 00:15:52.472 } 00:15:52.472 }, 00:15:52.472 { 00:15:52.472 "method": "nvmf_subsystem_add_listener", 00:15:52.472 "params": { 00:15:52.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.472 "listen_address": { 00:15:52.472 "trtype": "TCP", 00:15:52.472 "adrfam": "IPv4", 00:15:52.472 "traddr": "10.0.0.2", 00:15:52.472 "trsvcid": "4420" 00:15:52.472 }, 00:15:52.472 "secure_channel": true 00:15:52.472 } 00:15:52.472 } 00:15:52.472 ] 00:15:52.472 } 00:15:52.472 ] 00:15:52.472 }' 00:15:52.472 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:52.732 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:52.732 "subsystems": [ 00:15:52.732 { 00:15:52.732 "subsystem": "keyring", 00:15:52.732 "config": [] 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "subsystem": "iobuf", 00:15:52.732 "config": [ 00:15:52.732 { 00:15:52.732 "method": "iobuf_set_options", 00:15:52.732 "params": { 00:15:52.732 "small_pool_count": 8192, 00:15:52.732 "large_pool_count": 1024, 00:15:52.732 "small_bufsize": 8192, 00:15:52.732 "large_bufsize": 135168 00:15:52.732 } 00:15:52.732 } 00:15:52.732 ] 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "subsystem": "sock", 00:15:52.732 "config": [ 00:15:52.732 { 00:15:52.732 "method": "sock_set_default_impl", 00:15:52.732 "params": { 00:15:52.732 "impl_name": "uring" 00:15:52.732 } 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "method": "sock_impl_set_options", 00:15:52.732 "params": { 00:15:52.732 "impl_name": "ssl", 00:15:52.732 "recv_buf_size": 4096, 00:15:52.732 "send_buf_size": 4096, 00:15:52.732 "enable_recv_pipe": true, 00:15:52.732 "enable_quickack": false, 00:15:52.732 "enable_placement_id": 0, 00:15:52.732 "enable_zerocopy_send_server": true, 00:15:52.732 "enable_zerocopy_send_client": false, 00:15:52.732 "zerocopy_threshold": 0, 00:15:52.732 "tls_version": 0, 00:15:52.732 "enable_ktls": false 00:15:52.732 } 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "method": "sock_impl_set_options", 00:15:52.732 "params": { 00:15:52.732 "impl_name": "posix", 00:15:52.732 "recv_buf_size": 2097152, 00:15:52.732 "send_buf_size": 2097152, 00:15:52.732 "enable_recv_pipe": true, 00:15:52.732 "enable_quickack": false, 00:15:52.732 "enable_placement_id": 0, 00:15:52.732 "enable_zerocopy_send_server": true, 00:15:52.732 "enable_zerocopy_send_client": false, 00:15:52.732 "zerocopy_threshold": 0, 00:15:52.732 "tls_version": 0, 00:15:52.732 "enable_ktls": false 00:15:52.732 } 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "method": "sock_impl_set_options", 00:15:52.732 "params": { 
00:15:52.732 "impl_name": "uring", 00:15:52.732 "recv_buf_size": 2097152, 00:15:52.732 "send_buf_size": 2097152, 00:15:52.732 "enable_recv_pipe": true, 00:15:52.732 "enable_quickack": false, 00:15:52.732 "enable_placement_id": 0, 00:15:52.732 "enable_zerocopy_send_server": false, 00:15:52.732 "enable_zerocopy_send_client": false, 00:15:52.732 "zerocopy_threshold": 0, 00:15:52.732 "tls_version": 0, 00:15:52.732 "enable_ktls": false 00:15:52.732 } 00:15:52.732 } 00:15:52.732 ] 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "subsystem": "vmd", 00:15:52.732 "config": [] 00:15:52.732 }, 00:15:52.732 { 00:15:52.732 "subsystem": "accel", 00:15:52.732 "config": [ 00:15:52.732 { 00:15:52.733 "method": "accel_set_options", 00:15:52.733 "params": { 00:15:52.733 "small_cache_size": 128, 00:15:52.733 "large_cache_size": 16, 00:15:52.733 "task_count": 2048, 00:15:52.733 "sequence_count": 2048, 00:15:52.733 "buf_count": 2048 00:15:52.733 } 00:15:52.733 } 00:15:52.733 ] 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "subsystem": "bdev", 00:15:52.733 "config": [ 00:15:52.733 { 00:15:52.733 "method": "bdev_set_options", 00:15:52.733 "params": { 00:15:52.733 "bdev_io_pool_size": 65535, 00:15:52.733 "bdev_io_cache_size": 256, 00:15:52.733 "bdev_auto_examine": true, 00:15:52.733 "iobuf_small_cache_size": 128, 00:15:52.733 "iobuf_large_cache_size": 16 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_raid_set_options", 00:15:52.733 "params": { 00:15:52.733 "process_window_size_kb": 1024, 00:15:52.733 "process_max_bandwidth_mb_sec": 0 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_iscsi_set_options", 00:15:52.733 "params": { 00:15:52.733 "timeout_sec": 30 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_nvme_set_options", 00:15:52.733 "params": { 00:15:52.733 "action_on_timeout": "none", 00:15:52.733 "timeout_us": 0, 00:15:52.733 "timeout_admin_us": 0, 00:15:52.733 "keep_alive_timeout_ms": 10000, 00:15:52.733 "arbitration_burst": 0, 00:15:52.733 "low_priority_weight": 0, 00:15:52.733 "medium_priority_weight": 0, 00:15:52.733 "high_priority_weight": 0, 00:15:52.733 "nvme_adminq_poll_period_us": 10000, 00:15:52.733 "nvme_ioq_poll_period_us": 0, 00:15:52.733 "io_queue_requests": 512, 00:15:52.733 "delay_cmd_submit": true, 00:15:52.733 "transport_retry_count": 4, 00:15:52.733 "bdev_retry_count": 3, 00:15:52.733 "transport_ack_timeout": 0, 00:15:52.733 "ctrlr_loss_timeout_sec": 0, 00:15:52.733 "reconnect_delay_sec": 0, 00:15:52.733 "fast_io_fail_timeout_sec": 0, 00:15:52.733 "disable_auto_failback": false, 00:15:52.733 "generate_uuids": false, 00:15:52.733 "transport_tos": 0, 00:15:52.733 "nvme_error_stat": false, 00:15:52.733 "rdma_srq_size": 0, 00:15:52.733 "io_path_stat": false, 00:15:52.733 "allow_accel_sequence": false, 00:15:52.733 "rdma_max_cq_size": 0, 00:15:52.733 "rdma_cm_event_timeout_ms": 0, 00:15:52.733 "dhchap_digests": [ 00:15:52.733 "sha256", 00:15:52.733 "sha384", 00:15:52.733 "sha512" 00:15:52.733 ], 00:15:52.733 "dhchap_dhgroups": [ 00:15:52.733 "null", 00:15:52.733 "ffdhe2048", 00:15:52.733 "ffdhe3072", 00:15:52.733 "ffdhe4096", 00:15:52.733 "ffdhe6144", 00:15:52.733 "ffdhe8192" 00:15:52.733 ] 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_nvme_attach_controller", 00:15:52.733 "params": { 00:15:52.733 "name": "TLSTEST", 00:15:52.733 "trtype": "TCP", 00:15:52.733 "adrfam": "IPv4", 00:15:52.733 "traddr": "10.0.0.2", 00:15:52.733 "trsvcid": "4420", 00:15:52.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:15:52.733 "prchk_reftag": false, 00:15:52.733 "prchk_guard": false, 00:15:52.733 "ctrlr_loss_timeout_sec": 0, 00:15:52.733 "reconnect_delay_sec": 0, 00:15:52.733 "fast_io_fail_timeout_sec": 0, 00:15:52.733 "psk": "/tmp/tmp.xCUlETHsE5", 00:15:52.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.733 "hdgst": false, 00:15:52.733 "ddgst": false 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_nvme_set_hotplug", 00:15:52.733 "params": { 00:15:52.733 "period_us": 100000, 00:15:52.733 "enable": false 00:15:52.733 } 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "method": "bdev_wait_for_examine" 00:15:52.733 } 00:15:52.733 ] 00:15:52.733 }, 00:15:52.733 { 00:15:52.733 "subsystem": "nbd", 00:15:52.733 "config": [] 00:15:52.733 } 00:15:52.733 ] 00:15:52.733 }' 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 72816 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72816 ']' 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72816 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72816 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72816' 00:15:52.733 killing process with pid 72816 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72816 00:15:52.733 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.733 00:15:52.733 Latency(us) 00:15:52.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.733 =================================================================================================================== 00:15:52.733 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.733 [2024-07-25 14:04:01.845362] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:52.733 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72816 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72767 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72767 ']' 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72767 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72767 00:15:52.994 killing process with pid 72767 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:52.994 14:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72767' 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72767 00:15:52.994 [2024-07-25 14:04:02.073992] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72767 00:15:52.994 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:52.994 "subsystems": [ 00:15:52.994 { 00:15:52.994 "subsystem": "keyring", 00:15:52.994 "config": [] 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "subsystem": "iobuf", 00:15:52.994 "config": [ 00:15:52.994 { 00:15:52.994 "method": "iobuf_set_options", 00:15:52.994 "params": { 00:15:52.994 "small_pool_count": 8192, 00:15:52.994 "large_pool_count": 1024, 00:15:52.994 "small_bufsize": 8192, 00:15:52.994 "large_bufsize": 135168 00:15:52.994 } 00:15:52.994 } 00:15:52.994 ] 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "subsystem": "sock", 00:15:52.994 "config": [ 00:15:52.994 { 00:15:52.994 "method": "sock_set_default_impl", 00:15:52.994 "params": { 00:15:52.994 "impl_name": "uring" 00:15:52.994 } 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "method": "sock_impl_set_options", 00:15:52.994 "params": { 00:15:52.994 "impl_name": "ssl", 00:15:52.994 "recv_buf_size": 4096, 00:15:52.994 "send_buf_size": 4096, 00:15:52.994 "enable_recv_pipe": true, 00:15:52.994 "enable_quickack": false, 00:15:52.994 "enable_placement_id": 0, 00:15:52.994 "enable_zerocopy_send_server": true, 00:15:52.994 "enable_zerocopy_send_client": false, 00:15:52.994 "zerocopy_threshold": 0, 00:15:52.994 "tls_version": 0, 00:15:52.994 "enable_ktls": false 00:15:52.994 } 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "method": "sock_impl_set_options", 00:15:52.994 "params": { 00:15:52.994 "impl_name": "posix", 00:15:52.994 "recv_buf_size": 2097152, 00:15:52.994 "send_buf_size": 2097152, 00:15:52.994 "enable_recv_pipe": true, 00:15:52.994 "enable_quickack": false, 00:15:52.994 "enable_placement_id": 0, 00:15:52.994 "enable_zerocopy_send_server": true, 00:15:52.994 "enable_zerocopy_send_client": false, 00:15:52.994 "zerocopy_threshold": 0, 00:15:52.994 "tls_version": 0, 00:15:52.994 "enable_ktls": false 00:15:52.994 } 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "method": "sock_impl_set_options", 00:15:52.994 "params": { 00:15:52.994 "impl_name": "uring", 00:15:52.994 "recv_buf_size": 2097152, 00:15:52.994 "send_buf_size": 2097152, 00:15:52.994 "enable_recv_pipe": true, 00:15:52.994 "enable_quickack": false, 00:15:52.994 "enable_placement_id": 0, 00:15:52.994 "enable_zerocopy_send_server": false, 00:15:52.994 "enable_zerocopy_send_client": false, 00:15:52.994 "zerocopy_threshold": 0, 00:15:52.994 "tls_version": 0, 00:15:52.994 "enable_ktls": false 00:15:52.994 } 00:15:52.994 } 00:15:52.994 ] 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "subsystem": "vmd", 00:15:52.994 "config": [] 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "subsystem": "accel", 00:15:52.994 "config": [ 00:15:52.994 { 00:15:52.994 "method": "accel_set_options", 00:15:52.994 "params": { 00:15:52.994 "small_cache_size": 128, 00:15:52.994 "large_cache_size": 16, 00:15:52.994 "task_count": 2048, 00:15:52.994 "sequence_count": 2048, 00:15:52.994 "buf_count": 2048 00:15:52.994 } 
00:15:52.994 } 00:15:52.994 ] 00:15:52.994 }, 00:15:52.994 { 00:15:52.994 "subsystem": "bdev", 00:15:52.995 "config": [ 00:15:52.995 { 00:15:52.995 "method": "bdev_set_options", 00:15:52.995 "params": { 00:15:52.995 "bdev_io_pool_size": 65535, 00:15:52.995 "bdev_io_cache_size": 256, 00:15:52.995 "bdev_auto_examine": true, 00:15:52.995 "iobuf_small_cache_size": 128, 00:15:52.995 "iobuf_large_cache_size": 16 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_raid_set_options", 00:15:52.995 "params": { 00:15:52.995 "process_window_size_kb": 1024, 00:15:52.995 "process_max_bandwidth_mb_sec": 0 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_iscsi_set_options", 00:15:52.995 "params": { 00:15:52.995 "timeout_sec": 30 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_nvme_set_options", 00:15:52.995 "params": { 00:15:52.995 "action_on_timeout": "none", 00:15:52.995 "timeout_us": 0, 00:15:52.995 "timeout_admin_us": 0, 00:15:52.995 "keep_alive_timeout_ms": 10000, 00:15:52.995 "arbitration_burst": 0, 00:15:52.995 "low_priority_weight": 0, 00:15:52.995 "medium_priority_weight": 0, 00:15:52.995 "high_priority_weight": 0, 00:15:52.995 "nvme_adminq_poll_period_us": 10000, 00:15:52.995 "nvme_ioq_poll_period_us": 0, 00:15:52.995 "io_queue_requests": 0, 00:15:52.995 "delay_cmd_submit": true, 00:15:52.995 "transport_retry_count": 4, 00:15:52.995 "bdev_retry_count": 3, 00:15:52.995 "transport_ack_timeout": 0, 00:15:52.995 "ctrlr_loss_timeout_sec": 0, 00:15:52.995 "reconnect_delay_sec": 0, 00:15:52.995 "fast_io_fail_timeout_sec": 0, 00:15:52.995 "disable_auto_failback": false, 00:15:52.995 "generate_uuids": false, 00:15:52.995 "transport_tos": 0, 00:15:52.995 "nvme_error_stat": false, 00:15:52.995 "rdma_srq_size": 0, 00:15:52.995 "io_path_stat": false, 00:15:52.995 "allow_accel_sequence": false, 00:15:52.995 "rdma_max_cq_size": 0, 00:15:52.995 "rdma_cm_event_timeout_ms": 0, 00:15:52.995 "dhchap_digests": [ 00:15:52.995 "sha256", 00:15:52.995 "sha384", 00:15:52.995 "sha512" 00:15:52.995 ], 00:15:52.995 "dhchap_dhgroups": [ 00:15:52.995 "null", 00:15:52.995 "ffdhe2048", 00:15:52.995 "ffdhe3072", 00:15:52.995 "ffdhe4096", 00:15:52.995 "ffdhe6144", 00:15:52.995 "ffdhe8192" 00:15:52.995 ] 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_nvme_set_hotplug", 00:15:52.995 "params": { 00:15:52.995 "period_us": 100000, 00:15:52.995 "enable": false 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_malloc_create", 00:15:52.995 "params": { 00:15:52.995 "name": "malloc0", 00:15:52.995 "num_blocks": 8192, 00:15:52.995 "block_size": 4096, 00:15:52.995 "physical_block_size": 4096, 00:15:52.995 "uuid": "298fe509-c209-4532-878b-27976dc1c2f3", 00:15:52.995 "optimal_io_boundary": 0, 00:15:52.995 "md_size": 0, 00:15:52.995 "dif_type": 0, 00:15:52.995 "dif_is_head_of_md": false, 00:15:52.995 "dif_pi_format": 0 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "bdev_wait_for_examine" 00:15:52.995 } 00:15:52.995 ] 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "subsystem": "nbd", 00:15:52.995 "config": [] 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "subsystem": "scheduler", 00:15:52.995 "config": [ 00:15:52.995 { 00:15:52.995 "method": "framework_set_scheduler", 00:15:52.995 "params": { 00:15:52.995 "name": "static" 00:15:52.995 } 00:15:52.995 } 00:15:52.995 ] 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "subsystem": "nvmf", 00:15:52.995 "config": [ 00:15:52.995 { 00:15:52.995 "method": 
"nvmf_set_config", 00:15:52.995 "params": { 00:15:52.995 "discovery_filter": "match_any", 00:15:52.995 "admin_cmd_passthru": { 00:15:52.995 "identify_ctrlr": false 00:15:52.995 } 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_set_max_subsystems", 00:15:52.995 "params": { 00:15:52.995 "max_subsystems": 1024 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_set_crdt", 00:15:52.995 "params": { 00:15:52.995 "crdt1": 0, 00:15:52.995 "crdt2": 0, 00:15:52.995 "crdt3": 0 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_create_transport", 00:15:52.995 "params": { 00:15:52.995 "trtype": "TCP", 00:15:52.995 "max_queue_depth": 128, 00:15:52.995 "max_io_qpairs_per_ctrlr": 127, 00:15:52.995 "in_capsule_data_size": 4096, 00:15:52.995 "max_io_size": 131072, 00:15:52.995 "io_unit_size": 131072, 00:15:52.995 "max_aq_depth": 128, 00:15:52.995 "num_shared_buffers": 511, 00:15:52.995 "buf_cache_size": 4294967295, 00:15:52.995 "dif_insert_or_strip": false, 00:15:52.995 "zcopy": false, 00:15:52.995 "c2h_success": false, 00:15:52.995 "sock_priority": 0, 00:15:52.995 "abort_timeout_sec": 1, 00:15:52.995 "ack_timeout": 0, 00:15:52.995 "data_wr_pool_size": 0 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_create_subsystem", 00:15:52.995 "params": { 00:15:52.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.995 "allow_any_host": false, 00:15:52.995 "serial_number": "SPDK00000000000001", 00:15:52.995 "model_number": "SPDK bdev Controller", 00:15:52.995 "max_namespaces": 10, 00:15:52.995 "min_cntlid": 1, 00:15:52.995 "max_cntlid": 65519, 00:15:52.995 "ana_reporting": false 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_subsystem_add_host", 00:15:52.995 "params": { 00:15:52.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.995 "host": "nqn.2016-06.io.spdk:host1", 00:15:52.995 "psk": "/tmp/tmp.xCUlETHsE5" 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_subsystem_add_ns", 00:15:52.995 "params": { 00:15:52.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.995 "namespace": { 00:15:52.995 "nsid": 1, 00:15:52.995 "bdev_name": "malloc0", 00:15:52.995 "nguid": "298FE509C2094532878B27976DC1C2F3", 00:15:52.995 "uuid": "298fe509-c209-4532-878b-27976dc1c2f3", 00:15:52.995 "no_auto_visible": false 00:15:52.995 } 00:15:52.995 } 00:15:52.995 }, 00:15:52.995 { 00:15:52.995 "method": "nvmf_subsystem_add_listener", 00:15:52.995 "params": { 00:15:52.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.995 "listen_address": { 00:15:52.995 "trtype": "TCP", 00:15:52.995 "adrfam": "IPv4", 00:15:52.995 "traddr": "10.0.0.2", 00:15:52.995 "trsvcid": "4420" 00:15:52.995 }, 00:15:52.995 "secure_channel": true 00:15:52.995 } 00:15:52.995 } 00:15:52.995 ] 00:15:52.995 } 00:15:52.995 ] 00:15:52.995 }' 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72864 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:52.995 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72864 00:15:52.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72864 ']' 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.996 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 [2024-07-25 14:04:02.329405] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:15:53.257 [2024-07-25 14:04:02.329471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.257 [2024-07-25 14:04:02.468780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.516 [2024-07-25 14:04:02.571829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.516 [2024-07-25 14:04:02.571968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.516 [2024-07-25 14:04:02.572023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.516 [2024-07-25 14:04:02.572052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.516 [2024-07-25 14:04:02.572069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
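Everything configured by hand above is then captured as JSON with save_config and fed straight into a fresh target through -c /dev/fd/62, so none of the per-RPC setup is repeated: the dump already carries the TCP transport, the subsystem, the TLS listener and the PSK path for host1. A condensed sketch of that round trip, assuming the previous nvmf_tgt has been shut down first:

    SPDK=/home/vagrant/spdk_repo/spdk
    # capture the running target's configuration as one JSON document
    tgtconf=$("$SPDK"/scripts/rpc.py save_config)
    # ... stop the old target, then boot a new one directly from that JSON
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

The <(...) process substitution is what shows up as /dev/fd/62 in the trace.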
00:15:53.516 [2024-07-25 14:04:02.572155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.516 [2024-07-25 14:04:02.727684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:53.516 [2024-07-25 14:04:02.790781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.516 [2024-07-25 14:04:02.806679] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:53.775 [2024-07-25 14:04:02.822669] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:53.775 [2024-07-25 14:04:02.830439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72891 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72891 /var/tmp/bdevperf.sock 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72891 ']' 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:54.035 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:54.035 "subsystems": [ 00:15:54.035 { 00:15:54.035 "subsystem": "keyring", 00:15:54.035 "config": [] 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "subsystem": "iobuf", 00:15:54.035 "config": [ 00:15:54.035 { 00:15:54.035 "method": "iobuf_set_options", 00:15:54.035 "params": { 00:15:54.035 "small_pool_count": 8192, 00:15:54.035 "large_pool_count": 1024, 00:15:54.035 "small_bufsize": 8192, 00:15:54.035 "large_bufsize": 135168 00:15:54.035 } 00:15:54.035 } 00:15:54.035 ] 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "subsystem": "sock", 00:15:54.035 "config": [ 00:15:54.035 { 00:15:54.035 "method": "sock_set_default_impl", 00:15:54.035 "params": { 00:15:54.035 "impl_name": "uring" 00:15:54.035 } 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "method": "sock_impl_set_options", 00:15:54.035 "params": { 00:15:54.035 "impl_name": "ssl", 00:15:54.035 "recv_buf_size": 4096, 00:15:54.035 "send_buf_size": 4096, 00:15:54.035 "enable_recv_pipe": true, 00:15:54.035 "enable_quickack": false, 00:15:54.035 "enable_placement_id": 0, 00:15:54.035 "enable_zerocopy_send_server": true, 00:15:54.035 "enable_zerocopy_send_client": false, 00:15:54.035 "zerocopy_threshold": 0, 00:15:54.035 "tls_version": 0, 00:15:54.035 "enable_ktls": false 00:15:54.035 } 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "method": "sock_impl_set_options", 00:15:54.035 "params": { 00:15:54.035 "impl_name": "posix", 00:15:54.035 "recv_buf_size": 2097152, 
00:15:54.035 "send_buf_size": 2097152, 00:15:54.035 "enable_recv_pipe": true, 00:15:54.035 "enable_quickack": false, 00:15:54.035 "enable_placement_id": 0, 00:15:54.035 "enable_zerocopy_send_server": true, 00:15:54.035 "enable_zerocopy_send_client": false, 00:15:54.035 "zerocopy_threshold": 0, 00:15:54.035 "tls_version": 0, 00:15:54.035 "enable_ktls": false 00:15:54.035 } 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "method": "sock_impl_set_options", 00:15:54.035 "params": { 00:15:54.035 "impl_name": "uring", 00:15:54.035 "recv_buf_size": 2097152, 00:15:54.035 "send_buf_size": 2097152, 00:15:54.035 "enable_recv_pipe": true, 00:15:54.035 "enable_quickack": false, 00:15:54.035 "enable_placement_id": 0, 00:15:54.035 "enable_zerocopy_send_server": false, 00:15:54.035 "enable_zerocopy_send_client": false, 00:15:54.035 "zerocopy_threshold": 0, 00:15:54.035 "tls_version": 0, 00:15:54.035 "enable_ktls": false 00:15:54.035 } 00:15:54.035 } 00:15:54.035 ] 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "subsystem": "vmd", 00:15:54.035 "config": [] 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "subsystem": "accel", 00:15:54.035 "config": [ 00:15:54.035 { 00:15:54.035 "method": "accel_set_options", 00:15:54.035 "params": { 00:15:54.035 "small_cache_size": 128, 00:15:54.035 "large_cache_size": 16, 00:15:54.035 "task_count": 2048, 00:15:54.035 "sequence_count": 2048, 00:15:54.035 "buf_count": 2048 00:15:54.035 } 00:15:54.035 } 00:15:54.035 ] 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "subsystem": "bdev", 00:15:54.035 "config": [ 00:15:54.035 { 00:15:54.035 "method": "bdev_set_options", 00:15:54.035 "params": { 00:15:54.035 "bdev_io_pool_size": 65535, 00:15:54.035 "bdev_io_cache_size": 256, 00:15:54.035 "bdev_auto_examine": true, 00:15:54.035 "iobuf_small_cache_size": 128, 00:15:54.035 "iobuf_large_cache_size": 16 00:15:54.035 } 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "method": "bdev_raid_set_options", 00:15:54.035 "params": { 00:15:54.035 "process_window_size_kb": 1024, 00:15:54.035 "process_max_bandwidth_mb_sec": 0 00:15:54.035 } 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "method": "bdev_iscsi_set_options", 00:15:54.035 "params": { 00:15:54.035 "timeout_sec": 30 00:15:54.035 } 00:15:54.036 }, 00:15:54.036 { 00:15:54.036 "method": "bdev_nvme_set_options", 00:15:54.036 "params": { 00:15:54.036 "action_on_timeout": "none", 00:15:54.036 "timeout_us": 0, 00:15:54.036 "timeout_admin_us": 0, 00:15:54.036 "keep_alive_timeout_ms": 10000, 00:15:54.036 "arbitration_burst": 0, 00:15:54.036 "low_priority_weight": 0, 00:15:54.036 "medium_priority_weight": 0, 00:15:54.036 "high_priority_weight": 0, 00:15:54.036 "nvme_adminq_poll_period_us": 10000, 00:15:54.036 "nvme_ioq_poll_period_us": 0, 00:15:54.036 "io_queue_requests": 512, 00:15:54.036 "delay_cmd_submit": true, 00:15:54.036 "transport_retry_count": 4, 00:15:54.036 "bdev_retry_count": 3, 00:15:54.036 "transport_ack_timeout": 0, 00:15:54.036 "ctrlr_loss_timeout_sec": 0, 00:15:54.036 "reconnect_delay_sec": 0, 00:15:54.036 "fast_io_fail_timeout_sec": 0, 00:15:54.036 "disable_a 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.036 uto_failback": false, 00:15:54.036 "generate_uuids": false, 00:15:54.036 "transport_tos": 0, 00:15:54.036 "nvme_error_stat": false, 00:15:54.036 "rdma_srq_size": 0, 00:15:54.036 "io_path_stat": false, 00:15:54.036 "allow_accel_sequence": false, 00:15:54.036 "rdma_max_cq_size": 0, 00:15:54.036 "rdma_cm_event_timeout_ms": 0, 00:15:54.036 "dhchap_digests": [ 
00:15:54.036 "sha256", 00:15:54.036 "sha384", 00:15:54.036 "sha512" 00:15:54.036 ], 00:15:54.036 "dhchap_dhgroups": [ 00:15:54.036 "null", 00:15:54.036 "ffdhe2048", 00:15:54.036 "ffdhe3072", 00:15:54.036 "ffdhe4096", 00:15:54.036 "ffdhe6144", 00:15:54.036 "ffdhe8192" 00:15:54.036 ] 00:15:54.036 } 00:15:54.036 }, 00:15:54.036 { 00:15:54.036 "method": "bdev_nvme_attach_controller", 00:15:54.036 "params": { 00:15:54.036 "name": "TLSTEST", 00:15:54.036 "trtype": "TCP", 00:15:54.036 "adrfam": "IPv4", 00:15:54.036 "traddr": "10.0.0.2", 00:15:54.036 "trsvcid": "4420", 00:15:54.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.036 "prchk_reftag": false, 00:15:54.036 "prchk_guard": false, 00:15:54.036 "ctrlr_loss_timeout_sec": 0, 00:15:54.036 "reconnect_delay_sec": 0, 00:15:54.036 "fast_io_fail_timeout_sec": 0, 00:15:54.036 "psk": "/tmp/tmp.xCUlETHsE5", 00:15:54.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.036 "hdgst": false, 00:15:54.036 "ddgst": false 00:15:54.036 } 00:15:54.036 }, 00:15:54.036 { 00:15:54.036 "method": "bdev_nvme_set_hotplug", 00:15:54.036 "params": { 00:15:54.036 "period_us": 100000, 00:15:54.036 "enable": false 00:15:54.036 } 00:15:54.036 }, 00:15:54.036 { 00:15:54.036 "method": "bdev_wait_for_examine" 00:15:54.036 } 00:15:54.036 ] 00:15:54.036 }, 00:15:54.036 { 00:15:54.036 "subsystem": "nbd", 00:15:54.036 "config": [] 00:15:54.036 } 00:15:54.036 ] 00:15:54.036 }' 00:15:54.036 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.036 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:54.036 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.036 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 [2024-07-25 14:04:03.313810] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
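The bdevperf side mirrors this: the initiator is started in wait mode (-z) against its own RPC socket and given the captured bdevperfconf JSON (the /dev/fd/63 above), so the TLS-backed TLSTESTn1 bdev comes from config rather than a manual attach, and the I/O run that follows is triggered externally with bdevperf.py perform_tests. A compressed sketch of that pattern with the same flags as the trace; $bdevperfconf is assumed to hold the JSON captured earlier.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    # -z keeps bdevperf idle until perform_tests arrives on $SOCK
    "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # wait for $SOCK as before, then kick off the 10 s verify run (20 s wrapper timeout)
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests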
00:15:54.036 [2024-07-25 14:04:03.313969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:15:54.297 [2024-07-25 14:04:03.451622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.297 [2024-07-25 14:04:03.554650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.557 [2024-07-25 14:04:03.678165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:54.557 [2024-07-25 14:04:03.710809] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.557 [2024-07-25 14:04:03.711009] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:55.125 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.125 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:55.125 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:55.125 Running I/O for 10 seconds... 00:16:05.107 00:16:05.107 Latency(us) 00:16:05.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.107 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:05.107 Verification LBA range: start 0x0 length 0x2000 00:16:05.107 TLSTESTn1 : 10.01 5925.84 23.15 0.00 0.00 21564.49 4464.46 18201.26 00:16:05.107 =================================================================================================================== 00:16:05.107 Total : 5925.84 23.15 0.00 0.00 21564.49 4464.46 18201.26 00:16:05.107 0 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 72891 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72891 ']' 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72891 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72891 00:16:05.107 killing process with pid 72891 00:16:05.107 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.107 00:16:05.107 Latency(us) 00:16:05.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.107 =================================================================================================================== 00:16:05.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 72891' 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72891 00:16:05.107 [2024-07-25 14:04:14.372565] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:05.107 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72891 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 72864 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72864 ']' 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72864 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72864 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:05.366 killing process with pid 72864 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72864' 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72864 00:16:05.366 [2024-07-25 14:04:14.601496] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:05.366 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72864 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73030 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73030 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73030 ']' 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
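For reference, the target-side TLS setup that the trace below performs (target/tls.sh setup_nvmf_tgt) reduces to the following rpc.py sequence; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py as invoked in the trace, and /tmp/tmp.xCUlETHsE5 is this run's temporary PSK file, so both would differ in another environment:
  # create the TCP transport and a TLS-enabled listener (-k), then expose a malloc namespace
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # allow the host with the pre-shared key (the PSK-path form, which the log flags as deprecated)
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5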
00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.625 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.625 [2024-07-25 14:04:14.857662] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:05.625 [2024-07-25 14:04:14.857733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.883 [2024-07-25 14:04:14.994903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.883 [2024-07-25 14:04:15.095682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.883 [2024-07-25 14:04:15.095823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.883 [2024-07-25 14:04:15.095872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.883 [2024-07-25 14:04:15.095897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.883 [2024-07-25 14:04:15.095913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.883 [2024-07-25 14:04:15.095950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.883 [2024-07-25 14:04:15.136375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.xCUlETHsE5 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xCUlETHsE5 00:16:06.483 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:06.741 [2024-07-25 14:04:15.951238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.741 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:07.000 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:07.259 [2024-07-25 14:04:16.334600] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:07.259 [2024-07-25 14:04:16.334787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.259 14:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:07.259 malloc0 00:16:07.259 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.517 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCUlETHsE5 00:16:07.776 [2024-07-25 14:04:16.974321] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:07.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73079 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73079 /var/tmp/bdevperf.sock 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73079 ']' 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.776 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.776 [2024-07-25 14:04:17.053018] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:16:07.776 [2024-07-25 14:04:17.053204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73079 ] 00:16:08.034 [2024-07-25 14:04:17.196369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.034 [2024-07-25 14:04:17.300388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.292 [2024-07-25 14:04:17.341662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:08.859 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.859 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:08.859 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xCUlETHsE5 00:16:08.859 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:09.117 [2024-07-25 14:04:18.343547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.117 nvme0n1 00:16:09.376 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.376 Running I/O for 1 seconds... 00:16:10.314 00:16:10.314 Latency(us) 00:16:10.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.314 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:10.314 Verification LBA range: start 0x0 length 0x2000 00:16:10.314 nvme0n1 : 1.01 6188.65 24.17 0.00 0.00 20551.49 3863.48 17857.84 00:16:10.314 =================================================================================================================== 00:16:10.314 Total : 6188.65 24.17 0.00 0.00 20551.49 3863.48 17857.84 00:16:10.314 0 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 73079 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73079 ']' 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73079 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73079 00:16:10.314 killing process with pid 73079 00:16:10.314 Received shutdown signal, test time was about 1.000000 seconds 00:16:10.314 00:16:10.314 Latency(us) 00:16:10.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.314 =================================================================================================================== 00:16:10.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:10.314 
14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73079' 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73079 00:16:10.314 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73079 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 73030 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73030 ']' 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73030 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73030 00:16:10.574 killing process with pid 73030 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73030' 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73030 00:16:10.574 [2024-07-25 14:04:19.831061] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:10.574 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73030 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73130 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73130 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73130 ']' 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
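The earlier bdevperf attach passed the PSK as a raw file path ("psk": "/tmp/tmp.xCUlETHsE5"), which the log flags as the deprecated spdk_nvme_ctrlr_opts.psk feature; the runs from here on register the key in a keyring and reference it by name instead. Condensed from the trace that follows, the initiator-side sequence against the bdevperf RPC socket is (command paths shortened as above):
  # register the PSK file as a named key on the bdevperf application, then attach over TLS using it
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xCUlETHsE5
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # drive I/O through the attached controller
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests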
00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.834 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.834 [2024-07-25 14:04:20.092562] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:10.834 [2024-07-25 14:04:20.092734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.095 [2024-07-25 14:04:20.230070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.095 [2024-07-25 14:04:20.330877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.095 [2024-07-25 14:04:20.331020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.095 [2024-07-25 14:04:20.331054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.095 [2024-07-25 14:04:20.331080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.095 [2024-07-25 14:04:20.331095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.095 [2024-07-25 14:04:20.331133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.095 [2024-07-25 14:04:20.373080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:11.698 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.698 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:11.698 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.698 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.698 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.957 [2024-07-25 14:04:21.012117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.957 malloc0 00:16:11.957 [2024-07-25 14:04:21.040838] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:11.957 [2024-07-25 14:04:21.041006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73162 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73162 
/var/tmp/bdevperf.sock 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73162 ']' 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.957 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.957 [2024-07-25 14:04:21.120324] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:11.957 [2024-07-25 14:04:21.120486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73162 ] 00:16:11.958 [2024-07-25 14:04:21.256550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.216 [2024-07-25 14:04:21.358687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.216 [2024-07-25 14:04:21.400625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:12.782 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.782 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:12.782 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xCUlETHsE5 00:16:13.039 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:13.297 [2024-07-25 14:04:22.395802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:13.297 nvme0n1 00:16:13.297 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:13.297 Running I/O for 1 seconds... 
00:16:14.669 00:16:14.669 Latency(us) 00:16:14.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.669 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.669 Verification LBA range: start 0x0 length 0x2000 00:16:14.669 nvme0n1 : 1.01 5713.08 22.32 0.00 0.00 22233.18 4836.50 17628.90 00:16:14.669 =================================================================================================================== 00:16:14.669 Total : 5713.08 22.32 0.00 0.00 22233.18 4836.50 17628.90 00:16:14.669 0 00:16:14.669 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:16:14.669 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.669 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.669 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.669 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:16:14.669 "subsystems": [ 00:16:14.669 { 00:16:14.669 "subsystem": "keyring", 00:16:14.669 "config": [ 00:16:14.669 { 00:16:14.669 "method": "keyring_file_add_key", 00:16:14.669 "params": { 00:16:14.669 "name": "key0", 00:16:14.669 "path": "/tmp/tmp.xCUlETHsE5" 00:16:14.669 } 00:16:14.669 } 00:16:14.669 ] 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "subsystem": "iobuf", 00:16:14.669 "config": [ 00:16:14.669 { 00:16:14.669 "method": "iobuf_set_options", 00:16:14.669 "params": { 00:16:14.669 "small_pool_count": 8192, 00:16:14.669 "large_pool_count": 1024, 00:16:14.669 "small_bufsize": 8192, 00:16:14.669 "large_bufsize": 135168 00:16:14.669 } 00:16:14.669 } 00:16:14.669 ] 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "subsystem": "sock", 00:16:14.669 "config": [ 00:16:14.669 { 00:16:14.669 "method": "sock_set_default_impl", 00:16:14.669 "params": { 00:16:14.669 "impl_name": "uring" 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "sock_impl_set_options", 00:16:14.669 "params": { 00:16:14.669 "impl_name": "ssl", 00:16:14.669 "recv_buf_size": 4096, 00:16:14.669 "send_buf_size": 4096, 00:16:14.669 "enable_recv_pipe": true, 00:16:14.669 "enable_quickack": false, 00:16:14.669 "enable_placement_id": 0, 00:16:14.669 "enable_zerocopy_send_server": true, 00:16:14.669 "enable_zerocopy_send_client": false, 00:16:14.669 "zerocopy_threshold": 0, 00:16:14.669 "tls_version": 0, 00:16:14.669 "enable_ktls": false 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "sock_impl_set_options", 00:16:14.669 "params": { 00:16:14.669 "impl_name": "posix", 00:16:14.669 "recv_buf_size": 2097152, 00:16:14.669 "send_buf_size": 2097152, 00:16:14.669 "enable_recv_pipe": true, 00:16:14.669 "enable_quickack": false, 00:16:14.669 "enable_placement_id": 0, 00:16:14.669 "enable_zerocopy_send_server": true, 00:16:14.669 "enable_zerocopy_send_client": false, 00:16:14.669 "zerocopy_threshold": 0, 00:16:14.669 "tls_version": 0, 00:16:14.669 "enable_ktls": false 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "sock_impl_set_options", 00:16:14.669 "params": { 00:16:14.669 "impl_name": "uring", 00:16:14.669 "recv_buf_size": 2097152, 00:16:14.669 "send_buf_size": 2097152, 00:16:14.669 "enable_recv_pipe": true, 00:16:14.669 "enable_quickack": false, 00:16:14.669 "enable_placement_id": 0, 00:16:14.669 "enable_zerocopy_send_server": false, 00:16:14.669 "enable_zerocopy_send_client": false, 00:16:14.669 
"zerocopy_threshold": 0, 00:16:14.669 "tls_version": 0, 00:16:14.669 "enable_ktls": false 00:16:14.669 } 00:16:14.669 } 00:16:14.669 ] 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "subsystem": "vmd", 00:16:14.669 "config": [] 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "subsystem": "accel", 00:16:14.669 "config": [ 00:16:14.669 { 00:16:14.669 "method": "accel_set_options", 00:16:14.669 "params": { 00:16:14.669 "small_cache_size": 128, 00:16:14.669 "large_cache_size": 16, 00:16:14.669 "task_count": 2048, 00:16:14.669 "sequence_count": 2048, 00:16:14.669 "buf_count": 2048 00:16:14.669 } 00:16:14.669 } 00:16:14.669 ] 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "subsystem": "bdev", 00:16:14.669 "config": [ 00:16:14.669 { 00:16:14.669 "method": "bdev_set_options", 00:16:14.669 "params": { 00:16:14.669 "bdev_io_pool_size": 65535, 00:16:14.669 "bdev_io_cache_size": 256, 00:16:14.669 "bdev_auto_examine": true, 00:16:14.669 "iobuf_small_cache_size": 128, 00:16:14.669 "iobuf_large_cache_size": 16 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "bdev_raid_set_options", 00:16:14.669 "params": { 00:16:14.669 "process_window_size_kb": 1024, 00:16:14.669 "process_max_bandwidth_mb_sec": 0 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "bdev_iscsi_set_options", 00:16:14.669 "params": { 00:16:14.669 "timeout_sec": 30 00:16:14.669 } 00:16:14.669 }, 00:16:14.669 { 00:16:14.669 "method": "bdev_nvme_set_options", 00:16:14.669 "params": { 00:16:14.669 "action_on_timeout": "none", 00:16:14.670 "timeout_us": 0, 00:16:14.670 "timeout_admin_us": 0, 00:16:14.670 "keep_alive_timeout_ms": 10000, 00:16:14.670 "arbitration_burst": 0, 00:16:14.670 "low_priority_weight": 0, 00:16:14.670 "medium_priority_weight": 0, 00:16:14.670 "high_priority_weight": 0, 00:16:14.670 "nvme_adminq_poll_period_us": 10000, 00:16:14.670 "nvme_ioq_poll_period_us": 0, 00:16:14.670 "io_queue_requests": 0, 00:16:14.670 "delay_cmd_submit": true, 00:16:14.670 "transport_retry_count": 4, 00:16:14.670 "bdev_retry_count": 3, 00:16:14.670 "transport_ack_timeout": 0, 00:16:14.670 "ctrlr_loss_timeout_sec": 0, 00:16:14.670 "reconnect_delay_sec": 0, 00:16:14.670 "fast_io_fail_timeout_sec": 0, 00:16:14.670 "disable_auto_failback": false, 00:16:14.670 "generate_uuids": false, 00:16:14.670 "transport_tos": 0, 00:16:14.670 "nvme_error_stat": false, 00:16:14.670 "rdma_srq_size": 0, 00:16:14.670 "io_path_stat": false, 00:16:14.670 "allow_accel_sequence": false, 00:16:14.670 "rdma_max_cq_size": 0, 00:16:14.670 "rdma_cm_event_timeout_ms": 0, 00:16:14.670 "dhchap_digests": [ 00:16:14.670 "sha256", 00:16:14.670 "sha384", 00:16:14.670 "sha512" 00:16:14.670 ], 00:16:14.670 "dhchap_dhgroups": [ 00:16:14.670 "null", 00:16:14.670 "ffdhe2048", 00:16:14.670 "ffdhe3072", 00:16:14.670 "ffdhe4096", 00:16:14.670 "ffdhe6144", 00:16:14.670 "ffdhe8192" 00:16:14.670 ] 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "bdev_nvme_set_hotplug", 00:16:14.670 "params": { 00:16:14.670 "period_us": 100000, 00:16:14.670 "enable": false 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "bdev_malloc_create", 00:16:14.670 "params": { 00:16:14.670 "name": "malloc0", 00:16:14.670 "num_blocks": 8192, 00:16:14.670 "block_size": 4096, 00:16:14.670 "physical_block_size": 4096, 00:16:14.670 "uuid": "52e79c0d-1ad0-4b77-9b8d-8765570a2c63", 00:16:14.670 "optimal_io_boundary": 0, 00:16:14.670 "md_size": 0, 00:16:14.670 "dif_type": 0, 00:16:14.670 "dif_is_head_of_md": false, 00:16:14.670 "dif_pi_format": 0 00:16:14.670 } 
00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "bdev_wait_for_examine" 00:16:14.670 } 00:16:14.670 ] 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "subsystem": "nbd", 00:16:14.670 "config": [] 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "subsystem": "scheduler", 00:16:14.670 "config": [ 00:16:14.670 { 00:16:14.670 "method": "framework_set_scheduler", 00:16:14.670 "params": { 00:16:14.670 "name": "static" 00:16:14.670 } 00:16:14.670 } 00:16:14.670 ] 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "subsystem": "nvmf", 00:16:14.670 "config": [ 00:16:14.670 { 00:16:14.670 "method": "nvmf_set_config", 00:16:14.670 "params": { 00:16:14.670 "discovery_filter": "match_any", 00:16:14.670 "admin_cmd_passthru": { 00:16:14.670 "identify_ctrlr": false 00:16:14.670 } 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_set_max_subsystems", 00:16:14.670 "params": { 00:16:14.670 "max_subsystems": 1024 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_set_crdt", 00:16:14.670 "params": { 00:16:14.670 "crdt1": 0, 00:16:14.670 "crdt2": 0, 00:16:14.670 "crdt3": 0 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_create_transport", 00:16:14.670 "params": { 00:16:14.670 "trtype": "TCP", 00:16:14.670 "max_queue_depth": 128, 00:16:14.670 "max_io_qpairs_per_ctrlr": 127, 00:16:14.670 "in_capsule_data_size": 4096, 00:16:14.670 "max_io_size": 131072, 00:16:14.670 "io_unit_size": 131072, 00:16:14.670 "max_aq_depth": 128, 00:16:14.670 "num_shared_buffers": 511, 00:16:14.670 "buf_cache_size": 4294967295, 00:16:14.670 "dif_insert_or_strip": false, 00:16:14.670 "zcopy": false, 00:16:14.670 "c2h_success": false, 00:16:14.670 "sock_priority": 0, 00:16:14.670 "abort_timeout_sec": 1, 00:16:14.670 "ack_timeout": 0, 00:16:14.670 "data_wr_pool_size": 0 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_create_subsystem", 00:16:14.670 "params": { 00:16:14.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.670 "allow_any_host": false, 00:16:14.670 "serial_number": "00000000000000000000", 00:16:14.670 "model_number": "SPDK bdev Controller", 00:16:14.670 "max_namespaces": 32, 00:16:14.670 "min_cntlid": 1, 00:16:14.670 "max_cntlid": 65519, 00:16:14.670 "ana_reporting": false 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_subsystem_add_host", 00:16:14.670 "params": { 00:16:14.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.670 "host": "nqn.2016-06.io.spdk:host1", 00:16:14.670 "psk": "key0" 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_subsystem_add_ns", 00:16:14.670 "params": { 00:16:14.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.670 "namespace": { 00:16:14.670 "nsid": 1, 00:16:14.670 "bdev_name": "malloc0", 00:16:14.670 "nguid": "52E79C0D1AD04B779B8D8765570A2C63", 00:16:14.670 "uuid": "52e79c0d-1ad0-4b77-9b8d-8765570a2c63", 00:16:14.670 "no_auto_visible": false 00:16:14.670 } 00:16:14.670 } 00:16:14.670 }, 00:16:14.670 { 00:16:14.670 "method": "nvmf_subsystem_add_listener", 00:16:14.670 "params": { 00:16:14.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.670 "listen_address": { 00:16:14.670 "trtype": "TCP", 00:16:14.670 "adrfam": "IPv4", 00:16:14.670 "traddr": "10.0.0.2", 00:16:14.670 "trsvcid": "4420" 00:16:14.670 }, 00:16:14.670 "secure_channel": false, 00:16:14.670 "sock_impl": "ssl" 00:16:14.670 } 00:16:14.670 } 00:16:14.670 ] 00:16:14.670 } 00:16:14.670 ] 00:16:14.670 }' 00:16:14.670 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:14.928 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:16:14.928 "subsystems": [ 00:16:14.928 { 00:16:14.928 "subsystem": "keyring", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "keyring_file_add_key", 00:16:14.928 "params": { 00:16:14.928 "name": "key0", 00:16:14.928 "path": "/tmp/tmp.xCUlETHsE5" 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "iobuf", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "iobuf_set_options", 00:16:14.928 "params": { 00:16:14.928 "small_pool_count": 8192, 00:16:14.928 "large_pool_count": 1024, 00:16:14.928 "small_bufsize": 8192, 00:16:14.928 "large_bufsize": 135168 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "sock", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "sock_set_default_impl", 00:16:14.928 "params": { 00:16:14.928 "impl_name": "uring" 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.929 "method": "sock_impl_set_options", 00:16:14.929 "params": { 00:16:14.929 "impl_name": "ssl", 00:16:14.929 "recv_buf_size": 4096, 00:16:14.929 "send_buf_size": 4096, 00:16:14.929 "enable_recv_pipe": true, 00:16:14.929 "enable_quickack": false, 00:16:14.929 "enable_placement_id": 0, 00:16:14.929 "enable_zerocopy_send_server": true, 00:16:14.929 "enable_zerocopy_send_client": false, 00:16:14.929 "zerocopy_threshold": 0, 00:16:14.929 "tls_version": 0, 00:16:14.929 "enable_ktls": false 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "sock_impl_set_options", 00:16:14.929 "params": { 00:16:14.929 "impl_name": "posix", 00:16:14.929 "recv_buf_size": 2097152, 00:16:14.929 "send_buf_size": 2097152, 00:16:14.929 "enable_recv_pipe": true, 00:16:14.929 "enable_quickack": false, 00:16:14.929 "enable_placement_id": 0, 00:16:14.929 "enable_zerocopy_send_server": true, 00:16:14.929 "enable_zerocopy_send_client": false, 00:16:14.929 "zerocopy_threshold": 0, 00:16:14.929 "tls_version": 0, 00:16:14.929 "enable_ktls": false 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "sock_impl_set_options", 00:16:14.929 "params": { 00:16:14.929 "impl_name": "uring", 00:16:14.929 "recv_buf_size": 2097152, 00:16:14.929 "send_buf_size": 2097152, 00:16:14.929 "enable_recv_pipe": true, 00:16:14.929 "enable_quickack": false, 00:16:14.929 "enable_placement_id": 0, 00:16:14.929 "enable_zerocopy_send_server": false, 00:16:14.929 "enable_zerocopy_send_client": false, 00:16:14.929 "zerocopy_threshold": 0, 00:16:14.929 "tls_version": 0, 00:16:14.929 "enable_ktls": false 00:16:14.929 } 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "subsystem": "vmd", 00:16:14.929 "config": [] 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "subsystem": "accel", 00:16:14.929 "config": [ 00:16:14.929 { 00:16:14.929 "method": "accel_set_options", 00:16:14.929 "params": { 00:16:14.929 "small_cache_size": 128, 00:16:14.929 "large_cache_size": 16, 00:16:14.929 "task_count": 2048, 00:16:14.929 "sequence_count": 2048, 00:16:14.929 "buf_count": 2048 00:16:14.929 } 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "subsystem": "bdev", 00:16:14.929 "config": [ 00:16:14.929 { 00:16:14.929 "method": "bdev_set_options", 00:16:14.929 "params": { 00:16:14.929 "bdev_io_pool_size": 65535, 00:16:14.929 "bdev_io_cache_size": 256, 00:16:14.929 "bdev_auto_examine": true, 
00:16:14.929 "iobuf_small_cache_size": 128, 00:16:14.929 "iobuf_large_cache_size": 16 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_raid_set_options", 00:16:14.929 "params": { 00:16:14.929 "process_window_size_kb": 1024, 00:16:14.929 "process_max_bandwidth_mb_sec": 0 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_iscsi_set_options", 00:16:14.929 "params": { 00:16:14.929 "timeout_sec": 30 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_nvme_set_options", 00:16:14.929 "params": { 00:16:14.929 "action_on_timeout": "none", 00:16:14.929 "timeout_us": 0, 00:16:14.929 "timeout_admin_us": 0, 00:16:14.929 "keep_alive_timeout_ms": 10000, 00:16:14.929 "arbitration_burst": 0, 00:16:14.929 "low_priority_weight": 0, 00:16:14.929 "medium_priority_weight": 0, 00:16:14.929 "high_priority_weight": 0, 00:16:14.929 "nvme_adminq_poll_period_us": 10000, 00:16:14.929 "nvme_ioq_poll_period_us": 0, 00:16:14.929 "io_queue_requests": 512, 00:16:14.929 "delay_cmd_submit": true, 00:16:14.929 "transport_retry_count": 4, 00:16:14.929 "bdev_retry_count": 3, 00:16:14.929 "transport_ack_timeout": 0, 00:16:14.929 "ctrlr_loss_timeout_sec": 0, 00:16:14.929 "reconnect_delay_sec": 0, 00:16:14.929 "fast_io_fail_timeout_sec": 0, 00:16:14.929 "disable_auto_failback": false, 00:16:14.929 "generate_uuids": false, 00:16:14.929 "transport_tos": 0, 00:16:14.929 "nvme_error_stat": false, 00:16:14.929 "rdma_srq_size": 0, 00:16:14.929 "io_path_stat": false, 00:16:14.929 "allow_accel_sequence": false, 00:16:14.929 "rdma_max_cq_size": 0, 00:16:14.929 "rdma_cm_event_timeout_ms": 0, 00:16:14.929 "dhchap_digests": [ 00:16:14.929 "sha256", 00:16:14.929 "sha384", 00:16:14.929 "sha512" 00:16:14.929 ], 00:16:14.929 "dhchap_dhgroups": [ 00:16:14.929 "null", 00:16:14.929 "ffdhe2048", 00:16:14.929 "ffdhe3072", 00:16:14.929 "ffdhe4096", 00:16:14.929 "ffdhe6144", 00:16:14.929 "ffdhe8192" 00:16:14.929 ] 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_nvme_attach_controller", 00:16:14.929 "params": { 00:16:14.929 "name": "nvme0", 00:16:14.929 "trtype": "TCP", 00:16:14.929 "adrfam": "IPv4", 00:16:14.929 "traddr": "10.0.0.2", 00:16:14.929 "trsvcid": "4420", 00:16:14.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.929 "prchk_reftag": false, 00:16:14.929 "prchk_guard": false, 00:16:14.929 "ctrlr_loss_timeout_sec": 0, 00:16:14.929 "reconnect_delay_sec": 0, 00:16:14.929 "fast_io_fail_timeout_sec": 0, 00:16:14.929 "psk": "key0", 00:16:14.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.929 "hdgst": false, 00:16:14.929 "ddgst": false 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_nvme_set_hotplug", 00:16:14.929 "params": { 00:16:14.929 "period_us": 100000, 00:16:14.929 "enable": false 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_enable_histogram", 00:16:14.929 "params": { 00:16:14.929 "name": "nvme0n1", 00:16:14.929 "enable": true 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_wait_for_examine" 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "subsystem": "nbd", 00:16:14.929 "config": [] 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }' 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73162 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73162 ']' 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 73162 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73162 00:16:14.929 killing process with pid 73162 00:16:14.929 Received shutdown signal, test time was about 1.000000 seconds 00:16:14.929 00:16:14.929 Latency(us) 00:16:14.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.929 =================================================================================================================== 00:16:14.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73162' 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73162 00:16:14.929 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73162 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 73130 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73130 ']' 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73130 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73130 00:16:15.187 killing process with pid 73130 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73130' 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73130 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73130 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.187 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:16:15.187 "subsystems": [ 00:16:15.187 { 00:16:15.187 "subsystem": "keyring", 00:16:15.187 "config": [ 00:16:15.187 { 00:16:15.187 "method": "keyring_file_add_key", 00:16:15.187 "params": { 00:16:15.187 "name": "key0", 00:16:15.187 "path": "/tmp/tmp.xCUlETHsE5" 00:16:15.187 } 00:16:15.187 } 00:16:15.187 ] 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "subsystem": "iobuf", 00:16:15.187 "config": [ 00:16:15.187 { 00:16:15.187 "method": "iobuf_set_options", 00:16:15.187 "params": { 00:16:15.187 "small_pool_count": 8192, 00:16:15.187 "large_pool_count": 
1024, 00:16:15.187 "small_bufsize": 8192, 00:16:15.187 "large_bufsize": 135168 00:16:15.187 } 00:16:15.187 } 00:16:15.187 ] 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "subsystem": "sock", 00:16:15.187 "config": [ 00:16:15.187 { 00:16:15.187 "method": "sock_set_default_impl", 00:16:15.187 "params": { 00:16:15.187 "impl_name": "uring" 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "method": "sock_impl_set_options", 00:16:15.187 "params": { 00:16:15.187 "impl_name": "ssl", 00:16:15.187 "recv_buf_size": 4096, 00:16:15.187 "send_buf_size": 4096, 00:16:15.187 "enable_recv_pipe": true, 00:16:15.187 "enable_quickack": false, 00:16:15.187 "enable_placement_id": 0, 00:16:15.187 "enable_zerocopy_send_server": true, 00:16:15.187 "enable_zerocopy_send_client": false, 00:16:15.187 "zerocopy_threshold": 0, 00:16:15.187 "tls_version": 0, 00:16:15.187 "enable_ktls": false 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "method": "sock_impl_set_options", 00:16:15.187 "params": { 00:16:15.187 "impl_name": "posix", 00:16:15.187 "recv_buf_size": 2097152, 00:16:15.187 "send_buf_size": 2097152, 00:16:15.187 "enable_recv_pipe": true, 00:16:15.187 "enable_quickack": false, 00:16:15.187 "enable_placement_id": 0, 00:16:15.187 "enable_zerocopy_send_server": true, 00:16:15.187 "enable_zerocopy_send_client": false, 00:16:15.187 "zerocopy_threshold": 0, 00:16:15.187 "tls_version": 0, 00:16:15.187 "enable_ktls": false 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "method": "sock_impl_set_options", 00:16:15.187 "params": { 00:16:15.187 "impl_name": "uring", 00:16:15.187 "recv_buf_size": 2097152, 00:16:15.187 "send_buf_size": 2097152, 00:16:15.187 "enable_recv_pipe": true, 00:16:15.187 "enable_quickack": false, 00:16:15.187 "enable_placement_id": 0, 00:16:15.187 "enable_zerocopy_send_server": false, 00:16:15.187 "enable_zerocopy_send_client": false, 00:16:15.187 "zerocopy_threshold": 0, 00:16:15.187 "tls_version": 0, 00:16:15.187 "enable_ktls": false 00:16:15.187 } 00:16:15.187 } 00:16:15.187 ] 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "subsystem": "vmd", 00:16:15.187 "config": [] 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "subsystem": "accel", 00:16:15.187 "config": [ 00:16:15.187 { 00:16:15.187 "method": "accel_set_options", 00:16:15.187 "params": { 00:16:15.187 "small_cache_size": 128, 00:16:15.187 "large_cache_size": 16, 00:16:15.187 "task_count": 2048, 00:16:15.187 "sequence_count": 2048, 00:16:15.187 "buf_count": 2048 00:16:15.187 } 00:16:15.187 } 00:16:15.187 ] 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "subsystem": "bdev", 00:16:15.187 "config": [ 00:16:15.187 { 00:16:15.187 "method": "bdev_set_options", 00:16:15.187 "params": { 00:16:15.187 "bdev_io_pool_size": 65535, 00:16:15.187 "bdev_io_cache_size": 256, 00:16:15.187 "bdev_auto_examine": true, 00:16:15.187 "iobuf_small_cache_size": 128, 00:16:15.187 "iobuf_large_cache_size": 16 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "method": "bdev_raid_set_options", 00:16:15.187 "params": { 00:16:15.187 "process_window_size_kb": 1024, 00:16:15.187 "process_max_bandwidth_mb_sec": 0 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.187 "method": "bdev_iscsi_set_options", 00:16:15.187 "params": { 00:16:15.187 "timeout_sec": 30 00:16:15.187 } 00:16:15.187 }, 00:16:15.187 { 00:16:15.188 "method": "bdev_nvme_set_options", 00:16:15.188 "params": { 00:16:15.188 "action_on_timeout": "none", 00:16:15.188 "timeout_us": 0, 00:16:15.188 "timeout_admin_us": 0, 00:16:15.188 "keep_alive_timeout_ms": 10000, 00:16:15.188 
"arbitration_burst": 0, 00:16:15.188 "low_priority_weight": 0, 00:16:15.188 "medium_priority_weight": 0, 00:16:15.188 "high_priority_weight": 0, 00:16:15.188 "nvme_adminq_poll_period_us": 10000, 00:16:15.188 "nvme_ioq_poll_period_us": 0, 00:16:15.188 "io_queue_requests": 0, 00:16:15.188 "delay_cmd_submit": true, 00:16:15.188 "transport_retry_count": 4, 00:16:15.188 "bdev_retry_count": 3, 00:16:15.188 "transport_ack_timeout": 0, 00:16:15.188 "ctrlr_loss_timeout_sec": 0, 00:16:15.188 "reconnect_delay_sec": 0, 00:16:15.188 "fast_io_fail_timeout_sec": 0, 00:16:15.188 "disable_auto_failback": false, 00:16:15.188 "generate_uuids": false, 00:16:15.188 "transport_tos": 0, 00:16:15.188 "nvme_error_stat": false, 00:16:15.188 "rdma_srq_size": 0, 00:16:15.188 "io_path_stat": false, 00:16:15.188 "allow_accel_sequence": false, 00:16:15.188 "rdma_max_cq_size": 0, 00:16:15.188 "rdma_cm_event_timeout_ms": 0, 00:16:15.188 "dhchap_digests": [ 00:16:15.188 "sha256", 00:16:15.188 "sha384", 00:16:15.188 "sha512" 00:16:15.188 ], 00:16:15.188 "dhchap_dhgroups": [ 00:16:15.188 "null", 00:16:15.188 "ffdhe2048", 00:16:15.188 "ffdhe3072", 00:16:15.188 "ffdhe4096", 00:16:15.188 "ffdhe6144", 00:16:15.188 "ffdhe8192" 00:16:15.188 ] 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "bdev_nvme_set_hotplug", 00:16:15.188 "params": { 00:16:15.188 "period_us": 100000, 00:16:15.188 "enable": false 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "bdev_malloc_create", 00:16:15.188 "params": { 00:16:15.188 "name": "malloc0", 00:16:15.188 "num_blocks": 8192, 00:16:15.188 "block_size": 4096, 00:16:15.188 "physical_block_size": 4096, 00:16:15.188 "uuid": "52e79c0d-1ad0-4b77-9b8d-8765570a2c63", 00:16:15.188 "optimal_io_boundary": 0, 00:16:15.188 "md_size": 0, 00:16:15.188 "dif_type": 0, 00:16:15.188 "dif_is_head_of_md": false, 00:16:15.188 "dif_pi_format": 0 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "bdev_wait_for_examine" 00:16:15.188 } 00:16:15.188 ] 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "subsystem": "nbd", 00:16:15.188 "config": [] 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "subsystem": "scheduler", 00:16:15.188 "config": [ 00:16:15.188 { 00:16:15.188 "method": "framework_set_scheduler", 00:16:15.188 "params": { 00:16:15.188 "name": "static" 00:16:15.188 } 00:16:15.188 } 00:16:15.188 ] 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "subsystem": "nvmf", 00:16:15.188 "config": [ 00:16:15.188 { 00:16:15.188 "method": "nvmf_set_config", 00:16:15.188 "params": { 00:16:15.188 "discovery_filter": "match_any", 00:16:15.188 "admin_cmd_passthru": { 00:16:15.188 "identify_ctrlr": false 00:16:15.188 } 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_set_max_subsystems", 00:16:15.188 "params": { 00:16:15.188 "max_subsystems": 1024 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_set_crdt", 00:16:15.188 "params": { 00:16:15.188 "crdt1": 0, 00:16:15.188 "crdt2": 0, 00:16:15.188 "crdt3": 0 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_create_transport", 00:16:15.188 "params": { 00:16:15.188 "trtype": "TCP", 00:16:15.188 "max_queue_depth": 128, 00:16:15.188 "max_io_qpairs_per_ctrlr": 127, 00:16:15.188 "in_capsule_data_size": 4096, 00:16:15.188 "max_io_size": 131072, 00:16:15.188 "io_unit_size": 131072, 00:16:15.188 "max_aq_depth": 128, 00:16:15.188 "num_shared_buffers": 511, 00:16:15.188 "buf_cache_size": 4294967295, 00:16:15.188 "dif_insert_or_strip": false, 00:16:15.188 "zcopy": false, 
00:16:15.188 "c2h_success": false, 00:16:15.188 "sock_priority": 0, 00:16:15.188 "abort_timeout_sec": 1, 00:16:15.188 "ack_timeout": 0, 00:16:15.188 "data_wr_pool_size": 0 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_create_subsystem", 00:16:15.188 "params": { 00:16:15.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.188 "allow_any_host": false, 00:16:15.188 "serial_number": "00000000000000000000", 00:16:15.188 "model_number": "SPDK bdev Controller", 00:16:15.188 "max_namespaces": 32, 00:16:15.188 "min_cntlid": 1, 00:16:15.188 "max_cntlid": 65519, 00:16:15.188 "ana_reporting": false 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_subsystem_add_host", 00:16:15.188 "params": { 00:16:15.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.188 "host": "nqn.2016-06.io.spdk:host1", 00:16:15.188 "psk": "key0" 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_subsystem_add_ns", 00:16:15.188 "params": { 00:16:15.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.188 "namespace": { 00:16:15.188 "nsid": 1, 00:16:15.188 "bdev_name": "malloc0", 00:16:15.188 "nguid": "52E79C0D1AD04B779B8D8765570A2C63", 00:16:15.188 "uuid": "52e79c0d-1ad0-4b77-9b8d-8765570a2c63", 00:16:15.188 "no_auto_visible": false 00:16:15.188 } 00:16:15.188 } 00:16:15.188 }, 00:16:15.188 { 00:16:15.188 "method": "nvmf_subsystem_add_listener", 00:16:15.188 "params": { 00:16:15.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.188 "listen_address": { 00:16:15.188 "trtype": "TCP", 00:16:15.188 "adrfam": "IPv4", 00:16:15.188 "traddr": "10.0.0.2", 00:16:15.188 "trsvcid": "4420" 00:16:15.188 }, 00:16:15.188 "secure_channel": false, 00:16:15.188 "sock_impl": "ssl" 00:16:15.188 } 00:16:15.188 } 00:16:15.188 ] 00:16:15.188 } 00:16:15.188 ] 00:16:15.188 }' 00:16:15.188 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73217 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73217 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73217 ']' 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.446 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 [2024-07-25 14:04:24.552673] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:16:15.446 [2024-07-25 14:04:24.552739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.446 [2024-07-25 14:04:24.690821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.704 [2024-07-25 14:04:24.791050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.704 [2024-07-25 14:04:24.791192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.704 [2024-07-25 14:04:24.791232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.704 [2024-07-25 14:04:24.791262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.704 [2024-07-25 14:04:24.791279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.704 [2024-07-25 14:04:24.791367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.704 [2024-07-25 14:04:24.945308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:15.961 [2024-07-25 14:04:25.013481] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.961 [2024-07-25 14:04:25.045392] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:15.961 [2024-07-25 14:04:25.053469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73249 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73249 /var/tmp/bdevperf.sock 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73249 ']' 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:16.221 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:16:16.221 "subsystems": [ 00:16:16.221 { 00:16:16.221 "subsystem": "keyring", 00:16:16.221 "config": [ 00:16:16.221 { 00:16:16.221 "method": "keyring_file_add_key", 00:16:16.221 "params": { 00:16:16.221 "name": "key0", 00:16:16.221 "path": "/tmp/tmp.xCUlETHsE5" 00:16:16.221 } 00:16:16.221 } 00:16:16.221 ] 00:16:16.221 }, 00:16:16.221 { 00:16:16.221 "subsystem": "iobuf", 00:16:16.221 "config": [ 00:16:16.221 { 00:16:16.221 "method": "iobuf_set_options", 00:16:16.221 "params": { 00:16:16.221 "small_pool_count": 8192, 00:16:16.221 "large_pool_count": 1024, 00:16:16.221 "small_bufsize": 8192, 00:16:16.221 "large_bufsize": 135168 00:16:16.221 } 00:16:16.221 } 00:16:16.221 ] 00:16:16.221 }, 00:16:16.221 { 00:16:16.221 "subsystem": "sock", 00:16:16.221 "config": [ 00:16:16.221 { 00:16:16.221 "method": "sock_set_default_impl", 00:16:16.221 "params": { 00:16:16.221 "impl_name": "uring" 00:16:16.221 } 00:16:16.221 }, 00:16:16.221 { 00:16:16.221 "method": "sock_impl_set_options", 00:16:16.221 "params": { 00:16:16.222 "impl_name": "ssl", 00:16:16.222 "recv_buf_size": 4096, 00:16:16.222 "send_buf_size": 4096, 00:16:16.222 "enable_recv_pipe": true, 00:16:16.222 "enable_quickack": false, 00:16:16.222 "enable_placement_id": 0, 00:16:16.222 "enable_zerocopy_send_server": true, 00:16:16.222 "enable_zerocopy_send_client": false, 00:16:16.222 "zerocopy_threshold": 0, 00:16:16.222 "tls_version": 0, 00:16:16.222 "enable_ktls": false 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "sock_impl_set_options", 00:16:16.222 "params": { 00:16:16.222 "impl_name": "posix", 00:16:16.222 "recv_buf_size": 2097152, 00:16:16.222 "send_buf_size": 2097152, 00:16:16.222 "enable_recv_pipe": true, 00:16:16.222 "enable_quickack": false, 00:16:16.222 "enable_placement_id": 0, 00:16:16.222 "enable_zerocopy_send_server": true, 00:16:16.222 "enable_zerocopy_send_client": false, 00:16:16.222 "zerocopy_threshold": 0, 00:16:16.222 "tls_version": 0, 00:16:16.222 "enable_ktls": false 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "sock_impl_set_options", 00:16:16.222 "params": { 00:16:16.222 "impl_name": "uring", 00:16:16.222 "recv_buf_size": 2097152, 00:16:16.222 "send_buf_size": 2097152, 00:16:16.222 "enable_recv_pipe": true, 00:16:16.222 "enable_quickack": false, 00:16:16.222 "enable_placement_id": 0, 00:16:16.222 "enable_zerocopy_send_server": false, 00:16:16.222 "enable_zerocopy_send_client": false, 00:16:16.222 "zerocopy_threshold": 0, 00:16:16.222 "tls_version": 0, 00:16:16.222 "enable_ktls": false 00:16:16.222 } 00:16:16.222 } 00:16:16.222 ] 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "subsystem": "vmd", 00:16:16.222 "config": [] 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "subsystem": "accel", 00:16:16.222 "config": [ 00:16:16.222 { 00:16:16.222 "method": "accel_set_options", 00:16:16.222 "params": { 00:16:16.222 "small_cache_size": 128, 00:16:16.222 "large_cache_size": 16, 00:16:16.222 "task_count": 2048, 00:16:16.222 "sequence_count": 2048, 00:16:16.222 "buf_count": 2048 00:16:16.222 } 00:16:16.222 } 00:16:16.222 ] 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "subsystem": "bdev", 00:16:16.222 "config": [ 00:16:16.222 { 00:16:16.222 "method": "bdev_set_options", 00:16:16.222 "params": { 00:16:16.222 "bdev_io_pool_size": 65535, 00:16:16.222 "bdev_io_cache_size": 256, 00:16:16.222 "bdev_auto_examine": true, 00:16:16.222 "iobuf_small_cache_size": 128, 00:16:16.222 "iobuf_large_cache_size": 16 
00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_raid_set_options", 00:16:16.222 "params": { 00:16:16.222 "process_window_size_kb": 1024, 00:16:16.222 "process_max_bandwidth_mb_sec": 0 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_iscsi_set_options", 00:16:16.222 "params": { 00:16:16.222 "timeout_sec": 30 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_nvme_set_options", 00:16:16.222 "params": { 00:16:16.222 "action_on_timeout": "none", 00:16:16.222 "timeout_us": 0, 00:16:16.222 "timeout_admin_us": 0, 00:16:16.222 "keep_alive_timeout_ms": 10000, 00:16:16.222 "arbitration_burst": 0, 00:16:16.222 "low_priority_weight": 0, 00:16:16.222 "medium_priority_weight": 0, 00:16:16.222 "high_priority_weight": 0, 00:16:16.222 "nvme_adminq_poll_period_us": 10000, 00:16:16.222 "nvme_ioq_poll_period_us": 0, 00:16:16.222 "io_queue_requests": 512, 00:16:16.222 "delay_cmd_submit": true, 00:16:16.222 "transport_retry_count": 4, 00:16:16.222 "bdev_retry_count": 3, 00:16:16.222 "transport_ack_timeout": 0, 00:16:16.222 "ctrlr_loss_timeout_sec": 0, 00:16:16.222 "reconnect_delay_sec": 0, 00:16:16.222 "fast_io_fail_timeout_sec": 0, 00:16:16.222 "disable_auto_failback": false, 00:16:16.222 "generate_uuids": false, 00:16:16.222 "transport_tos": 0, 00:16:16.222 "nvme_error_stat": false, 00:16:16.222 "rdma_srq_size": 0, 00:16:16.222 "io_path_stat": false, 00:16:16.222 "allow_accel_sequence": false, 00:16:16.222 "rdma_max_cq_size": 0, 00:16:16.222 "rdma_cm_event_timeout_ms": 0, 00:16:16.222 "dhchap_digests": [ 00:16:16.222 "sha256", 00:16:16.222 "sha384", 00:16:16.222 "sha512" 00:16:16.222 ], 00:16:16.222 "dhchap_dhgroups": [ 00:16:16.222 "null", 00:16:16.222 "ffdhe2048", 00:16:16.222 "ffdhe3072", 00:16:16.222 "ffdhe4096", 00:16:16.222 "ffdhe6144", 00:16:16.222 "ffdhe8192" 00:16:16.222 ] 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_nvme_attach_controller", 00:16:16.222 "params": { 00:16:16.222 "name": "nvme0", 00:16:16.222 "trtype": "TCP", 00:16:16.222 "adrfam": "IPv4", 00:16:16.222 "traddr": "10.0.0.2", 00:16:16.222 "trsvcid": "4420", 00:16:16.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.222 "prchk_reftag": false, 00:16:16.222 "prchk_guard": false, 00:16:16.222 "ctrlr_loss_timeout_sec": 0, 00:16:16.222 "reconnect_delay_sec": 0, 00:16:16.222 "fast_io_fail_timeout_sec": 0, 00:16:16.222 "psk": "key0", 00:16:16.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:16.222 "hdgst": false, 00:16:16.222 "ddgst": false 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_nvme_set_hotplug", 00:16:16.222 "params": { 00:16:16.222 "period_us": 100000, 00:16:16.222 "enable": false 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_enable_histogram", 00:16:16.222 "params": { 00:16:16.222 "name": "nvme0n1", 00:16:16.222 "enable": true 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "method": "bdev_wait_for_examine" 00:16:16.222 } 00:16:16.222 ] 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "subsystem": "nbd", 00:16:16.222 "config": [] 00:16:16.222 } 00:16:16.222 ] 00:16:16.222 }' 00:16:16.222 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.222 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.222 [2024-07-25 14:04:25.507772] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:16:16.222 [2024-07-25 14:04:25.508431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73249 ] 00:16:16.530 [2024-07-25 14:04:25.648617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.530 [2024-07-25 14:04:25.749736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.803 [2024-07-25 14:04:25.872819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:16.803 [2024-07-25 14:04:25.914101] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.062 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.062 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:17.062 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:17.062 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:16:17.322 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.322 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:17.581 Running I/O for 1 seconds... 00:16:18.519 00:16:18.520 Latency(us) 00:16:18.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.520 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.520 Verification LBA range: start 0x0 length 0x2000 00:16:18.520 nvme0n1 : 1.01 5549.43 21.68 0.00 0.00 22890.62 4807.88 24726.25 00:16:18.520 =================================================================================================================== 00:16:18.520 Total : 5549.43 21.68 0.00 0.00 22890.62 4807.88 24726.25 00:16:18.520 0 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:18.520 nvmf_trace.0 00:16:18.520 14:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73249 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73249 ']' 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73249 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.520 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73249 00:16:18.778 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:18.778 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:18.778 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73249' 00:16:18.778 killing process with pid 73249 00:16:18.778 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73249 00:16:18.778 Received shutdown signal, test time was about 1.000000 seconds 00:16:18.778 00:16:18.778 Latency(us) 00:16:18.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.778 =================================================================================================================== 00:16:18.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.778 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73249 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.778 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.778 rmmod nvme_tcp 00:16:19.037 rmmod nvme_fabrics 00:16:19.037 rmmod nvme_keyring 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73217 ']' 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73217 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73217 ']' 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73217 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.037 14:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73217 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73217' 00:16:19.037 killing process with pid 73217 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73217 00:16:19.037 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73217 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rNpE9rjilP /tmp/tmp.LsuX1K0dqx /tmp/tmp.xCUlETHsE5 00:16:19.295 00:16:19.295 real 1m22.116s 00:16:19.295 user 2m10.629s 00:16:19.295 sys 0m25.277s 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.295 ************************************ 00:16:19.295 END TEST nvmf_tls 00:16:19.295 ************************************ 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.295 ************************************ 00:16:19.295 START TEST nvmf_fips 00:16:19.295 ************************************ 00:16:19.295 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:19.555 * Looking for test storage... 
00:16:19.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.555 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:16:19.556 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:16:19.817 Error setting digest 00:16:19.817 00D2995BDB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:19.817 00D2995BDB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.817 Cannot find device "nvmf_tgt_br" 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.817 Cannot find device "nvmf_tgt_br2" 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.817 Cannot find device "nvmf_tgt_br" 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.817 Cannot find device "nvmf_tgt_br2" 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:19.817 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.817 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:20.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:20.079 00:16:20.079 --- 10.0.0.2 ping statistics --- 00:16:20.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.079 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:20.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:20.079 00:16:20.079 --- 10.0.0.3 ping statistics --- 00:16:20.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.079 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:16:20.079 00:16:20.079 --- 10.0.0.1 ping statistics --- 00:16:20.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.079 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.079 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73515 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73515 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73515 ']' 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.080 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.339 [2024-07-25 14:04:29.391878] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:16:20.339 [2024-07-25 14:04:29.391951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.339 [2024-07-25 14:04:29.531759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.339 [2024-07-25 14:04:29.630161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.339 [2024-07-25 14:04:29.630213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.339 [2024-07-25 14:04:29.630219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.339 [2024-07-25 14:04:29.630224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.339 [2024-07-25 14:04:29.630228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.339 [2024-07-25 14:04:29.630248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.598 [2024-07-25 14:04:29.671428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.165 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.165 [2024-07-25 14:04:30.449161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.165 [2024-07-25 14:04:30.465051] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:21.165 [2024-07-25 14:04:30.465234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.424 [2024-07-25 14:04:30.493951] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:21.424 malloc0 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73549 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73549 /var/tmp/bdevperf.sock 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73549 ']' 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.424 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:21.424 [2024-07-25 14:04:30.597897] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:21.424 [2024-07-25 14:04:30.597965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:16:21.683 [2024-07-25 14:04:30.737217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.683 [2024-07-25 14:04:30.834106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.683 [2024-07-25 14:04:30.876334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:22.248 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.248 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:22.248 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:22.505 [2024-07-25 14:04:31.600937] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.505 [2024-07-25 14:04:31.601051] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:22.505 TLSTESTn1 00:16:22.505 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:22.505 Running I/O for 10 seconds... 
00:16:32.493 00:16:32.493 Latency(us) 00:16:32.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:32.493 Verification LBA range: start 0x0 length 0x2000 00:16:32.493 TLSTESTn1 : 10.01 6044.46 23.61 0.00 0.00 21140.54 4435.84 17514.42 00:16:32.493 =================================================================================================================== 00:16:32.493 Total : 6044.46 23.61 0.00 0.00 21140.54 4435.84 17514.42 00:16:32.493 0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:32.751 nvmf_trace.0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73549 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73549 ']' 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73549 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73549 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:32.751 killing process with pid 73549 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73549' 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73549 00:16:32.751 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.751 00:16:32.751 Latency(us) 00:16:32.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.751 =================================================================================================================== 00:16:32.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.751 [2024-07-25 14:04:41.930857] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:32.751 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73549 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.009 rmmod nvme_tcp 00:16:33.009 rmmod nvme_fabrics 00:16:33.009 rmmod nvme_keyring 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73515 ']' 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73515 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73515 ']' 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73515 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73515 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:33.009 killing process with pid 73515 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73515' 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73515 00:16:33.009 [2024-07-25 14:04:42.266253] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:33.009 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73515 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:33.267 00:16:33.267 real 0m14.036s 00:16:33.267 user 0m19.201s 00:16:33.267 sys 0m5.382s 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.267 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:33.267 ************************************ 00:16:33.267 END TEST nvmf_fips 00:16:33.267 ************************************ 00:16:33.525 14:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:16:33.525 14:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:16:33.525 14:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:16:33.525 00:16:33.525 real 4m13.457s 00:16:33.525 user 8m44.637s 00:16:33.525 sys 0m55.553s 00:16:33.525 14:04:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.525 14:04:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.525 ************************************ 00:16:33.525 END TEST nvmf_target_extra 00:16:33.525 ************************************ 00:16:33.525 14:04:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:33.525 14:04:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.525 14:04:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.525 14:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.525 ************************************ 00:16:33.525 START TEST nvmf_host 00:16:33.525 ************************************ 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:33.525 * Looking for test storage... 
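Every suite in this job goes through the same run_test wrapper (the common/autotest_common.sh frames above): the caller names the test, points at its script, and passes --transport=tcp, and the wrapper prints the START TEST / END TEST banners and the real/user/sys timing seen above around the script invocation. The sketch below only mimics that banner-and-timing contract; it is not the actual run_test implementation, just an illustration of the pattern visible in this log:

  # Hypothetical stand-in for the wrapper pattern seen above.
  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                      # e.g. .../test/nvmf/nvmf_host.sh --transport=tcp
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }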
00:16:33.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.525 14:04:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.526 ************************************ 00:16:33.526 START TEST nvmf_identify 00:16:33.526 ************************************ 00:16:33.526 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:33.786 * Looking for test storage... 
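Both nvmf_host.sh and identify.sh start by sourcing test/nvmf/common.sh, which pins the listener ports (4420/4421/4422), the serial number, NET_TYPE=virt, and the host identity: the host NQN comes from nvme gen-hostnqn, and the host ID is the UUID portion of that NQN (ae1cc223-... above), so the --hostnqn/--hostid options collected in NVME_HOST always agree. A minimal sketch of that relationship, assuming the UUID is simply stripped out of the generated NQN (the exact extraction common.sh uses is not visible in this log):

  # Generate a host NQN and keep its UUID suffix as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # drop everything up to the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "$NVME_HOSTNQN -> $NVME_HOSTID"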
00:16:33.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.786 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.787 14:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.787 Cannot find device "nvmf_tgt_br" 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.787 Cannot find device "nvmf_tgt_br2" 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.787 Cannot find device "nvmf_tgt_br" 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.787 Cannot find device "nvmf_tgt_br2" 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:33.787 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
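The nvmf_veth_init sequence above, which continues just below with the remaining bridge attachments, the iptables ACCEPT rule, and the connectivity pings, builds a purely virtual test network: the target side lives in the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, and the veth peer ends are all enslaved to the nvmf_br bridge. Condensed to one target interface (the log repeats the same pattern for nvmf_tgt_if2 / 10.0.0.3), the topology is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end of the pair moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                  # bridge the host-side peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target reachability check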
00:16:34.046 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:34.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:34.047 00:16:34.047 --- 10.0.0.2 ping statistics --- 00:16:34.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.047 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:34.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:16:34.047 00:16:34.047 --- 10.0.0.3 ping statistics --- 00:16:34.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.047 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:34.047 00:16:34.047 --- 10.0.0.1 ping statistics --- 00:16:34.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.047 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73928 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73928 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@831 -- # '[' -z 73928 ']' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.047 14:04:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:34.306 [2024-07-25 14:04:43.369185] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:34.306 [2024-07-25 14:04:43.369666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.306 [2024-07-25 14:04:43.503003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.563 [2024-07-25 14:04:43.629782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.563 [2024-07-25 14:04:43.629908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.563 [2024-07-25 14:04:43.629932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.564 [2024-07-25 14:04:43.629951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.564 [2024-07-25 14:04:43.629967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
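With the virtual network up, identify.sh launches the target inside the namespace with every tracepoint group enabled (-e 0xFFFF) on a four-core mask (-m 0xF), and the notices above describe two ways to inspect the trace: a live snapshot via spdk_trace, or copying the shared-memory buffer for offline analysis. A short sketch of the same launch plus a snapshot; the spdk_trace binary path is an assumption (the log only names the command), and the RPC-socket wait is elided:

  # Start the target in the test namespace: shm id 0, all trace groups, cores 0-3.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # ... wait for the target to listen on /var/tmp/spdk.sock ...
  # Snapshot the nvmf tracepoints at runtime (binary path assumed to sit in the same build tree).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # Or keep the raw buffer for later, as the notice suggests.
  cp /dev/shm/nvmf_trace.0 /tmp/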
00:16:34.564 [2024-07-25 14:04:43.630231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.564 [2024-07-25 14:04:43.630416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.564 [2024-07-25 14:04:43.630466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.564 [2024-07-25 14:04:43.630476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.564 [2024-07-25 14:04:43.679331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.129 [2024-07-25 14:04:44.232362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.129 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 Malloc0 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 [2024-07-25 14:04:44.357970] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.130 [ 00:16:35.130 { 00:16:35.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.130 "subtype": "Discovery", 00:16:35.130 "listen_addresses": [ 00:16:35.130 { 00:16:35.130 "trtype": "TCP", 00:16:35.130 "adrfam": "IPv4", 00:16:35.130 "traddr": "10.0.0.2", 00:16:35.130 "trsvcid": "4420" 00:16:35.130 } 00:16:35.130 ], 00:16:35.130 "allow_any_host": true, 00:16:35.130 "hosts": [] 00:16:35.130 }, 00:16:35.130 { 00:16:35.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.130 "subtype": "NVMe", 00:16:35.130 "listen_addresses": [ 00:16:35.130 { 00:16:35.130 "trtype": "TCP", 00:16:35.130 "adrfam": "IPv4", 00:16:35.130 "traddr": "10.0.0.2", 00:16:35.130 "trsvcid": "4420" 00:16:35.130 } 00:16:35.130 ], 00:16:35.130 "allow_any_host": true, 00:16:35.130 "hosts": [], 00:16:35.130 "serial_number": "SPDK00000000000001", 00:16:35.130 "model_number": "SPDK bdev Controller", 00:16:35.130 "max_namespaces": 32, 00:16:35.130 "min_cntlid": 1, 00:16:35.130 "max_cntlid": 65519, 00:16:35.130 "namespaces": [ 00:16:35.130 { 00:16:35.130 "nsid": 1, 00:16:35.130 "bdev_name": "Malloc0", 00:16:35.130 "name": "Malloc0", 00:16:35.130 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:35.130 "eui64": "ABCDEF0123456789", 00:16:35.130 "uuid": "16c00bb8-b022-4986-9e37-01dea2f2b027" 00:16:35.130 } 00:16:35.130 ] 00:16:35.130 } 00:16:35.130 ] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.130 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:35.130 [2024-07-25 14:04:44.421221] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
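Before the identify run, the target is configured entirely over JSON-RPC (rpc_cmd in the trace is the test helper wrapping scripts/rpc.py): create the TCP transport, back a 64 MiB / 512 B-block Malloc0 bdev, expose it as namespace 1 of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420 for both that subsystem and discovery, which is exactly the state nvmf_get_subsystems prints above. The spdk_nvme_identify invocation that follows connects to the discovery subsystem with -L all, so the remainder of this log is the controller-init state machine in debug form: FABRIC CONNECT on the admin queue, property reads of VS and CAP, the CC.EN=0 / CSTS.RDY=0 check, setting CC.EN=1 and waiting for CSTS.RDY=1, then IDENTIFY controller, async-event configuration, and keep-alive setup. Condensed, the configuration and query sequence is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Query the discovery subsystem exactly as the test does:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all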
00:16:35.130 [2024-07-25 14:04:44.421271] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73968 ] 00:16:35.392 [2024-07-25 14:04:44.551273] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:35.392 [2024-07-25 14:04:44.551367] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:35.392 [2024-07-25 14:04:44.551372] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:35.392 [2024-07-25 14:04:44.551383] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:35.392 [2024-07-25 14:04:44.551393] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:35.392 [2024-07-25 14:04:44.551513] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:35.392 [2024-07-25 14:04:44.551547] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13b52c0 0 00:16:35.392 [2024-07-25 14:04:44.559330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:35.392 [2024-07-25 14:04:44.559357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:35.392 [2024-07-25 14:04:44.559362] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:35.392 [2024-07-25 14:04:44.559365] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:35.392 [2024-07-25 14:04:44.559413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.559420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.559423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.392 [2024-07-25 14:04:44.559438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:35.392 [2024-07-25 14:04:44.559467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.392 [2024-07-25 14:04:44.567329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.392 [2024-07-25 14:04:44.567354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.392 [2024-07-25 14:04:44.567359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.392 [2024-07-25 14:04:44.567379] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:35.392 [2024-07-25 14:04:44.567390] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:35.392 [2024-07-25 14:04:44.567395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:35.392 [2024-07-25 14:04:44.567431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.392 
[2024-07-25 14:04:44.567438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.392 [2024-07-25 14:04:44.567449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.392 [2024-07-25 14:04:44.567476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.392 [2024-07-25 14:04:44.567535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.392 [2024-07-25 14:04:44.567544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.392 [2024-07-25 14:04:44.567548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.392 [2024-07-25 14:04:44.567557] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:35.392 [2024-07-25 14:04:44.567562] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:35.392 [2024-07-25 14:04:44.567569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.392 [2024-07-25 14:04:44.567582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.392 [2024-07-25 14:04:44.567602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.392 [2024-07-25 14:04:44.567639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.392 [2024-07-25 14:04:44.567644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.392 [2024-07-25 14:04:44.567647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.392 [2024-07-25 14:04:44.567654] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:35.392 [2024-07-25 14:04:44.567661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.392 [2024-07-25 14:04:44.567667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.392 [2024-07-25 14:04:44.567678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.392 [2024-07-25 14:04:44.567692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.392 [2024-07-25 14:04:44.567738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.392 [2024-07-25 14:04:44.567748] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.392 [2024-07-25 14:04:44.567752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.392 [2024-07-25 14:04:44.567759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.392 [2024-07-25 14:04:44.567766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.392 [2024-07-25 14:04:44.567773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.567779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.393 [2024-07-25 14:04:44.567791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.567827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.393 [2024-07-25 14:04:44.567832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.393 [2024-07-25 14:04:44.567835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.567838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.393 [2024-07-25 14:04:44.567841] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:35.393 [2024-07-25 14:04:44.567845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:35.393 [2024-07-25 14:04:44.567851] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.393 [2024-07-25 14:04:44.567955] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:35.393 [2024-07-25 14:04:44.567965] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.393 [2024-07-25 14:04:44.567973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.567976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.567979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.567985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.393 [2024-07-25 14:04:44.568000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.568049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.393 [2024-07-25 14:04:44.568055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.393 [2024-07-25 14:04:44.568057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.393 
[2024-07-25 14:04:44.568060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.393 [2024-07-25 14:04:44.568064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.393 [2024-07-25 14:04:44.568072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.393 [2024-07-25 14:04:44.568095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.568133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.393 [2024-07-25 14:04:44.568138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.393 [2024-07-25 14:04:44.568141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.393 [2024-07-25 14:04:44.568147] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.393 [2024-07-25 14:04:44.568151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:35.393 [2024-07-25 14:04:44.568165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.393 [2024-07-25 14:04:44.568196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.568279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.393 [2024-07-25 14:04:44.568286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.393 [2024-07-25 14:04:44.568290] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b52c0): datao=0, datal=4096, cccid=0 00:16:35.393 [2024-07-25 14:04:44.568313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13f6940) on tqpair(0x13b52c0): expected_datao=0, payload_size=4096 00:16:35.393 [2024-07-25 14:04:44.568317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 
[2024-07-25 14:04:44.568324] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568328] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.393 [2024-07-25 14:04:44.568340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.393 [2024-07-25 14:04:44.568343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.393 [2024-07-25 14:04:44.568354] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:35.393 [2024-07-25 14:04:44.568358] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:35.393 [2024-07-25 14:04:44.568361] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:35.393 [2024-07-25 14:04:44.568369] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:35.393 [2024-07-25 14:04:44.568373] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:35.393 [2024-07-25 14:04:44.568376] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568384] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.393 [2024-07-25 14:04:44.568418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.568467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.393 [2024-07-25 14:04:44.568472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.393 [2024-07-25 14:04:44.568475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.393 [2024-07-25 14:04:44.568485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.393 [2024-07-25 14:04:44.568501] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.393 [2024-07-25 14:04:44.568516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.393 [2024-07-25 14:04:44.568531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.393 [2024-07-25 14:04:44.568545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.393 [2024-07-25 14:04:44.568556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.393 [2024-07-25 14:04:44.568559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b52c0) 00:16:35.393 [2024-07-25 14:04:44.568565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.393 [2024-07-25 14:04:44.568582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6940, cid 0, qid 0 00:16:35.393 [2024-07-25 14:04:44.568587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6ac0, cid 1, qid 0 00:16:35.393 [2024-07-25 14:04:44.568591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6c40, cid 2, qid 0 00:16:35.394 [2024-07-25 14:04:44.568595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.394 [2024-07-25 14:04:44.568598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6f40, cid 4, qid 0 00:16:35.394 [2024-07-25 14:04:44.568679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.568684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.568687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6f40) on tqpair=0x13b52c0 00:16:35.394 [2024-07-25 14:04:44.568694] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:35.394 [2024-07-25 14:04:44.568698] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:35.394 [2024-07-25 14:04:44.568706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b52c0) 00:16:35.394 [2024-07-25 14:04:44.568715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.394 [2024-07-25 14:04:44.568727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6f40, cid 4, qid 0 00:16:35.394 [2024-07-25 14:04:44.568774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.394 [2024-07-25 14:04:44.568779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.394 [2024-07-25 14:04:44.568782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568785] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b52c0): datao=0, datal=4096, cccid=4 00:16:35.394 [2024-07-25 14:04:44.568788] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13f6f40) on tqpair(0x13b52c0): expected_datao=0, payload_size=4096 00:16:35.394 [2024-07-25 14:04:44.568791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568797] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568800] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.568811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.568814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6f40) on tqpair=0x13b52c0 00:16:35.394 [2024-07-25 14:04:44.568827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:35.394 [2024-07-25 14:04:44.568851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b52c0) 00:16:35.394 [2024-07-25 14:04:44.568860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.394 [2024-07-25 14:04:44.568866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.568871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b52c0) 00:16:35.394 [2024-07-25 14:04:44.568877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.394 [2024-07-25 14:04:44.568894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13f6f40, cid 4, qid 0 00:16:35.394 [2024-07-25 14:04:44.568899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f70c0, cid 5, qid 0 00:16:35.394 [2024-07-25 14:04:44.568994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.394 [2024-07-25 14:04:44.569001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.394 [2024-07-25 14:04:44.569004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569006] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b52c0): datao=0, datal=1024, cccid=4 00:16:35.394 [2024-07-25 14:04:44.569009] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13f6f40) on tqpair(0x13b52c0): expected_datao=0, payload_size=1024 00:16:35.394 [2024-07-25 14:04:44.569013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569018] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569021] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.569031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.569033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f70c0) on tqpair=0x13b52c0 00:16:35.394 [2024-07-25 14:04:44.569050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.569055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.569058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6f40) on tqpair=0x13b52c0 00:16:35.394 [2024-07-25 14:04:44.569070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b52c0) 00:16:35.394 [2024-07-25 14:04:44.569078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.394 [2024-07-25 14:04:44.569092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6f40, cid 4, qid 0 00:16:35.394 [2024-07-25 14:04:44.569145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.394 [2024-07-25 14:04:44.569151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.394 [2024-07-25 14:04:44.569153] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569156] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b52c0): datao=0, datal=3072, cccid=4 00:16:35.394 [2024-07-25 14:04:44.569159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13f6f40) on tqpair(0x13b52c0): expected_datao=0, payload_size=3072 00:16:35.394 [2024-07-25 14:04:44.569162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569172] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.569183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.569186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6f40) on tqpair=0x13b52c0 00:16:35.394 [2024-07-25 14:04:44.569196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b52c0) 00:16:35.394 [2024-07-25 14:04:44.569204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.394 [2024-07-25 14:04:44.569220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6f40, cid 4, qid 0 00:16:35.394 [2024-07-25 14:04:44.569277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.394 [2024-07-25 14:04:44.569282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.394 [2024-07-25 14:04:44.569285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b52c0): datao=0, datal=8, cccid=4 00:16:35.394 [2024-07-25 14:04:44.569291] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13f6f40) on tqpair(0x13b52c0): expected_datao=0, payload_size=8 00:16:35.394 [2024-07-25 14:04:44.569294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569310] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569313] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.394 [2024-07-25 14:04:44.569331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.394 [2024-07-25 14:04:44.569333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.394 [2024-07-25 14:04:44.569336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6f40) on tqpair=0x13b52c0 00:16:35.394 ===================================================== 00:16:35.394 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:35.394 ===================================================== 00:16:35.394 Controller Capabilities/Features 00:16:35.394 ================================ 00:16:35.394 Vendor ID: 0000 00:16:35.394 Subsystem Vendor ID: 0000 00:16:35.394 Serial Number: .................... 00:16:35.394 Model Number: ........................................ 
00:16:35.394 Firmware Version: 24.09 00:16:35.394 Recommended Arb Burst: 0 00:16:35.394 IEEE OUI Identifier: 00 00 00 00:16:35.394 Multi-path I/O 00:16:35.394 May have multiple subsystem ports: No 00:16:35.394 May have multiple controllers: No 00:16:35.394 Associated with SR-IOV VF: No 00:16:35.394 Max Data Transfer Size: 131072 00:16:35.394 Max Number of Namespaces: 0 00:16:35.394 Max Number of I/O Queues: 1024 00:16:35.394 NVMe Specification Version (VS): 1.3 00:16:35.394 NVMe Specification Version (Identify): 1.3 00:16:35.394 Maximum Queue Entries: 128 00:16:35.394 Contiguous Queues Required: Yes 00:16:35.394 Arbitration Mechanisms Supported 00:16:35.394 Weighted Round Robin: Not Supported 00:16:35.394 Vendor Specific: Not Supported 00:16:35.395 Reset Timeout: 15000 ms 00:16:35.395 Doorbell Stride: 4 bytes 00:16:35.395 NVM Subsystem Reset: Not Supported 00:16:35.395 Command Sets Supported 00:16:35.395 NVM Command Set: Supported 00:16:35.395 Boot Partition: Not Supported 00:16:35.395 Memory Page Size Minimum: 4096 bytes 00:16:35.395 Memory Page Size Maximum: 4096 bytes 00:16:35.395 Persistent Memory Region: Not Supported 00:16:35.395 Optional Asynchronous Events Supported 00:16:35.395 Namespace Attribute Notices: Not Supported 00:16:35.395 Firmware Activation Notices: Not Supported 00:16:35.395 ANA Change Notices: Not Supported 00:16:35.395 PLE Aggregate Log Change Notices: Not Supported 00:16:35.395 LBA Status Info Alert Notices: Not Supported 00:16:35.395 EGE Aggregate Log Change Notices: Not Supported 00:16:35.395 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.395 Zone Descriptor Change Notices: Not Supported 00:16:35.395 Discovery Log Change Notices: Supported 00:16:35.395 Controller Attributes 00:16:35.395 128-bit Host Identifier: Not Supported 00:16:35.395 Non-Operational Permissive Mode: Not Supported 00:16:35.395 NVM Sets: Not Supported 00:16:35.395 Read Recovery Levels: Not Supported 00:16:35.395 Endurance Groups: Not Supported 00:16:35.395 Predictable Latency Mode: Not Supported 00:16:35.395 Traffic Based Keep ALive: Not Supported 00:16:35.395 Namespace Granularity: Not Supported 00:16:35.395 SQ Associations: Not Supported 00:16:35.395 UUID List: Not Supported 00:16:35.395 Multi-Domain Subsystem: Not Supported 00:16:35.395 Fixed Capacity Management: Not Supported 00:16:35.395 Variable Capacity Management: Not Supported 00:16:35.395 Delete Endurance Group: Not Supported 00:16:35.395 Delete NVM Set: Not Supported 00:16:35.395 Extended LBA Formats Supported: Not Supported 00:16:35.395 Flexible Data Placement Supported: Not Supported 00:16:35.395 00:16:35.395 Controller Memory Buffer Support 00:16:35.395 ================================ 00:16:35.395 Supported: No 00:16:35.395 00:16:35.395 Persistent Memory Region Support 00:16:35.395 ================================ 00:16:35.395 Supported: No 00:16:35.395 00:16:35.395 Admin Command Set Attributes 00:16:35.395 ============================ 00:16:35.395 Security Send/Receive: Not Supported 00:16:35.395 Format NVM: Not Supported 00:16:35.395 Firmware Activate/Download: Not Supported 00:16:35.395 Namespace Management: Not Supported 00:16:35.395 Device Self-Test: Not Supported 00:16:35.395 Directives: Not Supported 00:16:35.395 NVMe-MI: Not Supported 00:16:35.395 Virtualization Management: Not Supported 00:16:35.395 Doorbell Buffer Config: Not Supported 00:16:35.395 Get LBA Status Capability: Not Supported 00:16:35.395 Command & Feature Lockdown Capability: Not Supported 00:16:35.395 Abort Command Limit: 1 00:16:35.395 Async 
Event Request Limit: 4 00:16:35.395 Number of Firmware Slots: N/A 00:16:35.395 Firmware Slot 1 Read-Only: N/A 00:16:35.395 Firmware Activation Without Reset: N/A 00:16:35.395 Multiple Update Detection Support: N/A 00:16:35.395 Firmware Update Granularity: No Information Provided 00:16:35.395 Per-Namespace SMART Log: No 00:16:35.395 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.395 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:35.395 Command Effects Log Page: Not Supported 00:16:35.395 Get Log Page Extended Data: Supported 00:16:35.395 Telemetry Log Pages: Not Supported 00:16:35.395 Persistent Event Log Pages: Not Supported 00:16:35.395 Supported Log Pages Log Page: May Support 00:16:35.395 Commands Supported & Effects Log Page: Not Supported 00:16:35.395 Feature Identifiers & Effects Log Page:May Support 00:16:35.395 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.395 Data Area 4 for Telemetry Log: Not Supported 00:16:35.395 Error Log Page Entries Supported: 128 00:16:35.395 Keep Alive: Not Supported 00:16:35.395 00:16:35.395 NVM Command Set Attributes 00:16:35.395 ========================== 00:16:35.395 Submission Queue Entry Size 00:16:35.395 Max: 1 00:16:35.395 Min: 1 00:16:35.395 Completion Queue Entry Size 00:16:35.395 Max: 1 00:16:35.395 Min: 1 00:16:35.395 Number of Namespaces: 0 00:16:35.395 Compare Command: Not Supported 00:16:35.395 Write Uncorrectable Command: Not Supported 00:16:35.395 Dataset Management Command: Not Supported 00:16:35.395 Write Zeroes Command: Not Supported 00:16:35.395 Set Features Save Field: Not Supported 00:16:35.395 Reservations: Not Supported 00:16:35.395 Timestamp: Not Supported 00:16:35.395 Copy: Not Supported 00:16:35.395 Volatile Write Cache: Not Present 00:16:35.395 Atomic Write Unit (Normal): 1 00:16:35.395 Atomic Write Unit (PFail): 1 00:16:35.395 Atomic Compare & Write Unit: 1 00:16:35.395 Fused Compare & Write: Supported 00:16:35.395 Scatter-Gather List 00:16:35.395 SGL Command Set: Supported 00:16:35.395 SGL Keyed: Supported 00:16:35.395 SGL Bit Bucket Descriptor: Not Supported 00:16:35.395 SGL Metadata Pointer: Not Supported 00:16:35.395 Oversized SGL: Not Supported 00:16:35.395 SGL Metadata Address: Not Supported 00:16:35.395 SGL Offset: Supported 00:16:35.395 Transport SGL Data Block: Not Supported 00:16:35.395 Replay Protected Memory Block: Not Supported 00:16:35.395 00:16:35.395 Firmware Slot Information 00:16:35.395 ========================= 00:16:35.395 Active slot: 0 00:16:35.395 00:16:35.395 00:16:35.395 Error Log 00:16:35.395 ========= 00:16:35.395 00:16:35.395 Active Namespaces 00:16:35.395 ================= 00:16:35.395 Discovery Log Page 00:16:35.395 ================== 00:16:35.395 Generation Counter: 2 00:16:35.395 Number of Records: 2 00:16:35.395 Record Format: 0 00:16:35.395 00:16:35.395 Discovery Log Entry 0 00:16:35.395 ---------------------- 00:16:35.395 Transport Type: 3 (TCP) 00:16:35.395 Address Family: 1 (IPv4) 00:16:35.395 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:35.395 Entry Flags: 00:16:35.395 Duplicate Returned Information: 1 00:16:35.395 Explicit Persistent Connection Support for Discovery: 1 00:16:35.395 Transport Requirements: 00:16:35.395 Secure Channel: Not Required 00:16:35.395 Port ID: 0 (0x0000) 00:16:35.395 Controller ID: 65535 (0xffff) 00:16:35.395 Admin Max SQ Size: 128 00:16:35.395 Transport Service Identifier: 4420 00:16:35.395 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:35.395 Transport Address: 10.0.0.2 00:16:35.395 
Discovery Log Entry 1 00:16:35.395 ---------------------- 00:16:35.395 Transport Type: 3 (TCP) 00:16:35.395 Address Family: 1 (IPv4) 00:16:35.395 Subsystem Type: 2 (NVM Subsystem) 00:16:35.395 Entry Flags: 00:16:35.395 Duplicate Returned Information: 0 00:16:35.395 Explicit Persistent Connection Support for Discovery: 0 00:16:35.395 Transport Requirements: 00:16:35.395 Secure Channel: Not Required 00:16:35.395 Port ID: 0 (0x0000) 00:16:35.395 Controller ID: 65535 (0xffff) 00:16:35.395 Admin Max SQ Size: 128 00:16:35.395 Transport Service Identifier: 4420 00:16:35.395 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:35.395 Transport Address: 10.0.0.2 [2024-07-25 14:04:44.569433] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:35.395 [2024-07-25 14:04:44.569442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6940) on tqpair=0x13b52c0 00:16:35.395 [2024-07-25 14:04:44.569448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.396 [2024-07-25 14:04:44.569452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6ac0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.396 [2024-07-25 14:04:44.569460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6c40) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.396 [2024-07-25 14:04:44.569467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.396 [2024-07-25 14:04:44.569479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.569490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.569560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.569567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.569570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 
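The discovery log page dumped above carries two records: entry 0 describes the discovery subsystem itself, and entry 1 points at the NVM subsystem nqn.2016-06.io.spdk:cnode1, reachable over TCP at 10.0.0.2:4420. The surrounding DEBUG lines show the host driver performing exactly that exchange: FABRIC CONNECT to the discovery controller, IDENTIFY, then GET LOG PAGE (02) with log identifier 0x70. As a rough illustration only (this program is not part of the test run; it assumes nothing beyond SPDK's public host API in spdk/nvme.h, and the program name is invented), the same enumeration can be driven with spdk_nvme_probe() against the discovery NQN:

/*
 * Illustrative only, not part of this test. With a discovery transport ID,
 * spdk_nvme_probe() connects to the discovery controller, reads the discovery
 * log page and invokes the callbacks per NVM-subsystem record, mirroring the
 * trace above.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	/* Accept every subsystem listed in the discovery log page. */
	printf("discovered %s at %s:%s\n", trid->subnqn, trid->traddr, trid->trsvcid);
	return true;
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* 131072 for this target, per the "MDTS max_xfer_size" DEBUG line above. */
	printf("attached to %s, max xfer size %u bytes\n",
	       trid->subnqn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
	spdk_nvme_detach(ctrlr);
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";          /* made-up program name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	return spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL) == 0 ? 0 : 1;
}

Each record of subsystem type "NVM Subsystem" results in one probe_cb/attach_cb pair, which is broadly how tools such as spdk_nvme_identify expand a discovery address into concrete controllers.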
14:04:44.569593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.569668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.569673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.569675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569683] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:35.396 [2024-07-25 14:04:44.569686] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:35.396 [2024-07-25 14:04:44.569694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.569705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.569765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.569770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.569773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.569796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.569844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.569849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.569851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569868] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.569873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.569927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.569932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.569935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.569946] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.569951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.569957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.569968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.570008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.570014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.570016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.570027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.570038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.570050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.570093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.570098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.570101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.570112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.570123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.570135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.570175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.570180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.570183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.570194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.570205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.570217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.570256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.570262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.570264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.570275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.396 [2024-07-25 14:04:44.570286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.396 [2024-07-25 14:04:44.570317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.396 [2024-07-25 14:04:44.570351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.396 [2024-07-25 14:04:44.570357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.396 [2024-07-25 14:04:44.570359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.396 [2024-07-25 14:04:44.570371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.396 [2024-07-25 14:04:44.570374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 
[2024-07-25 14:04:44.570437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:16:35.397 [2024-07-25 14:04:44.570690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.570929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.570935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.570938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.570948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.570954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.570960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.570971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.571012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.571017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.571020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.571023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.571030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.571034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.571036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.397 [2024-07-25 14:04:44.571042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.397 [2024-07-25 14:04:44.571054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.397 [2024-07-25 14:04:44.571099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.397 [2024-07-25 14:04:44.571104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.397 [2024-07-25 14:04:44.571107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.571110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.397 [2024-07-25 14:04:44.571118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.397 [2024-07-25 14:04:44.571121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.398 [2024-07-25 14:04:44.571129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.398 [2024-07-25 14:04:44.571141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.398 [2024-07-25 14:04:44.571181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.398 [2024-07-25 14:04:44.571186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.398 [2024-07-25 14:04:44.571189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.398 [2024-07-25 14:04:44.571199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571203] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.398 [2024-07-25 14:04:44.571211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.398 [2024-07-25 14:04:44.571223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.398 [2024-07-25 14:04:44.571265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.398 [2024-07-25 14:04:44.571271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.398 [2024-07-25 14:04:44.571274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.398 [2024-07-25 14:04:44.571284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.571290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b52c0) 00:16:35.398 [2024-07-25 14:04:44.575336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.398 [2024-07-25 14:04:44.575396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13f6dc0, cid 3, qid 0 00:16:35.398 [2024-07-25 14:04:44.575447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.398 [2024-07-25 14:04:44.575456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.398 [2024-07-25 14:04:44.575461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.398 [2024-07-25 14:04:44.575465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13f6dc0) on tqpair=0x13b52c0 00:16:35.398 [2024-07-25 14:04:44.575476] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:16:35.398 00:16:35.398 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:35.398 [2024-07-25 14:04:44.627150] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
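With the discovery pass finished (RTD3E = 0, shutdown complete in 5 milliseconds), the next step in host/identify.sh runs spdk_nvme_identify pointed straight at subnqn nqn.2016-06.io.spdk:cnode1; the admin-queue bring-up for that controller is what the DEBUG trace below records. A minimal sketch of the same direct fabrics connect, again assuming only SPDK's public host API and using an invented program name:

/*
 * Illustrative sketch, not part of the test: a synchronous fabrics connect to
 * the NVM subsystem reported in discovery log entry 1 above.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_sketch";            /* made-up program name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Blocks until the controller init state machine reaches "ready". */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected to %s, max xfer size %u bytes\n",
	       trid.subnqn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() internally walks the same state machine the log prints: FABRIC CONNECT, read VS and CAP, set CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY CONTROLLER, then configure AER and the keep-alive timer.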
00:16:35.398 [2024-07-25 14:04:44.627206] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:16:35.660 [2024-07-25 14:04:44.758741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:35.660 [2024-07-25 14:04:44.758843] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:35.660 [2024-07-25 14:04:44.758859] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:35.660 [2024-07-25 14:04:44.758870] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:35.660 [2024-07-25 14:04:44.758878] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:35.660 [2024-07-25 14:04:44.759012] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:35.660 [2024-07-25 14:04:44.759048] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x97a2c0 0 00:16:35.660 [2024-07-25 14:04:44.773332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:35.660 [2024-07-25 14:04:44.773360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:35.660 [2024-07-25 14:04:44.773364] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:35.660 [2024-07-25 14:04:44.773367] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:35.660 [2024-07-25 14:04:44.773415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.773420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.773424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.660 [2024-07-25 14:04:44.773437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:35.660 [2024-07-25 14:04:44.773467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.660 [2024-07-25 14:04:44.781331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.660 [2024-07-25 14:04:44.781350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.660 [2024-07-25 14:04:44.781354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.781357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.660 [2024-07-25 14:04:44.781368] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:35.660 [2024-07-25 14:04:44.781376] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:35.660 [2024-07-25 14:04:44.781380] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:35.660 [2024-07-25 14:04:44.781399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.781402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.781405] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.660 [2024-07-25 14:04:44.781415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.660 [2024-07-25 14:04:44.781439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.660 [2024-07-25 14:04:44.781492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.660 [2024-07-25 14:04:44.781497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.660 [2024-07-25 14:04:44.781500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.660 [2024-07-25 14:04:44.781502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.781509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:35.661 [2024-07-25 14:04:44.781514] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:35.661 [2024-07-25 14:04:44.781520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.781531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.781543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.781590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.781596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.781598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.781605] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:35.661 [2024-07-25 14:04:44.781611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.781627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.781638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.781675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.781680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.781682] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.781689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.781707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.781718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.781760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.781765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.781767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.781774] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:35.661 [2024-07-25 14:04:44.781777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781886] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:35.661 [2024-07-25 14:04:44.781896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.781916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.781928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.781967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.781972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.781975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.781981] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.661 [2024-07-25 14:04:44.781988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.781994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.782000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.782011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.782055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.782060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.782063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.782069] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.661 [2024-07-25 14:04:44.782072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:35.661 [2024-07-25 14:04:44.782078] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:35.661 [2024-07-25 14:04:44.782085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.661 [2024-07-25 14:04:44.782095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.782103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 14:04:44.782114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.782203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.661 [2024-07-25 14:04:44.782208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.661 [2024-07-25 14:04:44.782211] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782214] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=4096, cccid=0 00:16:35.661 [2024-07-25 14:04:44.782217] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bb940) on tqpair(0x97a2c0): expected_datao=0, payload_size=4096 00:16:35.661 [2024-07-25 14:04:44.782220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782227] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782230] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 
14:04:44.782237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.782242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.782244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.782254] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:35.661 [2024-07-25 14:04:44.782258] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:35.661 [2024-07-25 14:04:44.782261] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:35.661 [2024-07-25 14:04:44.782267] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:35.661 [2024-07-25 14:04:44.782270] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:35.661 [2024-07-25 14:04:44.782273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:35.661 [2024-07-25 14:04:44.782280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.661 [2024-07-25 14:04:44.782285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.782306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.661 [2024-07-25 14:04:44.782320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.661 [2024-07-25 14:04:44.782372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 14:04:44.782377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 14:04:44.782379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.661 [2024-07-25 14:04:44.782388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 14:04:44.782394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x97a2c0) 00:16:35.661 [2024-07-25 14:04:44.782399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.662 [2024-07-25 14:04:44.782404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x97a2c0) 00:16:35.662 
[2024-07-25 14:04:44.782413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.662 [2024-07-25 14:04:44.782418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.662 [2024-07-25 14:04:44.782433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.662 [2024-07-25 14:04:44.782445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 14:04:44.782480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bb940, cid 0, qid 0 00:16:35.662 [2024-07-25 14:04:44.782485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbac0, cid 1, qid 0 00:16:35.662 [2024-07-25 14:04:44.782488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbc40, cid 2, qid 0 00:16:35.662 [2024-07-25 14:04:44.782492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.662 [2024-07-25 14:04:44.782495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.662 [2024-07-25 14:04:44.782585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 14:04:44.782590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 14:04:44.782593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.662 [2024-07-25 14:04:44.782599] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:35.662 [2024-07-25 14:04:44.782603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782609] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.662 [2024-07-25 14:04:44.782640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.662 [2024-07-25 14:04:44.782692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 14:04:44.782697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 14:04:44.782699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.662 [2024-07-25 14:04:44.782760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782769] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 14:04:44.782796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.662 [2024-07-25 14:04:44.782846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.662 [2024-07-25 14:04:44.782851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.662 [2024-07-25 14:04:44.782854] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782856] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=4096, cccid=4 00:16:35.662 [2024-07-25 14:04:44.782859] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bbf40) on tqpair(0x97a2c0): expected_datao=0, payload_size=4096 00:16:35.662 [2024-07-25 14:04:44.782863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782868] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782871] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 14:04:44.782882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:35.662 [2024-07-25 14:04:44.782884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.662 [2024-07-25 14:04:44.782896] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:35.662 [2024-07-25 14:04:44.782905] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.782917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.782920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.782925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 14:04:44.782937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.662 [2024-07-25 14:04:44.783003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.662 [2024-07-25 14:04:44.783013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.662 [2024-07-25 14:04:44.783015] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783018] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=4096, cccid=4 00:16:35.662 [2024-07-25 14:04:44.783021] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bbf40) on tqpair(0x97a2c0): expected_datao=0, payload_size=4096 00:16:35.662 [2024-07-25 14:04:44.783025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783032] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 14:04:44.783043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 14:04:44.783046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.662 [2024-07-25 14:04:44.783062] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.783068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.662 [2024-07-25 14:04:44.783074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.662 [2024-07-25 14:04:44.783082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 14:04:44.783094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.662 [2024-07-25 14:04:44.783145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.662 [2024-07-25 14:04:44.783150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.662 [2024-07-25 14:04:44.783152] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=4096, cccid=4 00:16:35.662 [2024-07-25 14:04:44.783158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bbf40) on tqpair(0x97a2c0): expected_datao=0, payload_size=4096 00:16:35.662 [2024-07-25 14:04:44.783160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783165] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783168] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 14:04:44.783179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 14:04:44.783181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 14:04:44.783184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783207] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783218] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.663 [2024-07-25 14:04:44.783221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:35.663 [2024-07-25 14:04:44.783225] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:35.663 [2024-07-25 14:04:44.783241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.663 [2024-07-25 14:04:44.783281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.663 [2024-07-25 14:04:44.783285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc0c0, cid 5, qid 0 00:16:35.663 [2024-07-25 14:04:44.783362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 14:04:44.783368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 14:04:44.783370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 14:04:44.783382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 14:04:44.783384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc0c0) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc0c0, cid 5, qid 0 00:16:35.663 [2024-07-25 14:04:44.783454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 14:04:44.783459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 14:04:44.783462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc0c0) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc0c0, cid 5, qid 0 00:16:35.663 [2024-07-25 14:04:44.783537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 14:04:44.783542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 14:04:44.783545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc0c0) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc0c0, cid 5, qid 0 00:16:35.663 [2024-07-25 14:04:44.783610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 14:04:44.783615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 14:04:44.783617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc0c0) on tqpair=0x97a2c0 00:16:35.663 [2024-07-25 14:04:44.783632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x97a2c0) 00:16:35.663 [2024-07-25 14:04:44.783681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 14:04:44.783694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc0c0, cid 5, qid 0 00:16:35.663 [2024-07-25 14:04:44.783698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbf40, cid 4, qid 0 00:16:35.663 [2024-07-25 14:04:44.783702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc240, cid 6, qid 0 00:16:35.663 [2024-07-25 
14:04:44.783705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc3c0, cid 7, qid 0 00:16:35.663 [2024-07-25 14:04:44.783852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.663 [2024-07-25 14:04:44.783865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.663 [2024-07-25 14:04:44.783868] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783870] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=8192, cccid=5 00:16:35.663 [2024-07-25 14:04:44.783874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bc0c0) on tqpair(0x97a2c0): expected_datao=0, payload_size=8192 00:16:35.663 [2024-07-25 14:04:44.783877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783890] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783893] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.663 [2024-07-25 14:04:44.783902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.663 [2024-07-25 14:04:44.783905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783907] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=512, cccid=4 00:16:35.663 [2024-07-25 14:04:44.783910] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bbf40) on tqpair(0x97a2c0): expected_datao=0, payload_size=512 00:16:35.663 [2024-07-25 14:04:44.783913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783918] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783921] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.663 [2024-07-25 14:04:44.783929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.663 [2024-07-25 14:04:44.783932] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=512, cccid=6 00:16:35.663 [2024-07-25 14:04:44.783937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bc240) on tqpair(0x97a2c0): expected_datao=0, payload_size=512 00:16:35.663 [2024-07-25 14:04:44.783940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783947] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.663 [2024-07-25 14:04:44.783956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.663 [2024-07-25 14:04:44.783958] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783960] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x97a2c0): datao=0, datal=4096, cccid=7 00:16:35.663 [2024-07-25 14:04:44.783963] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bc3c0) on tqpair(0x97a2c0): expected_datao=0, payload_size=4096 00:16:35.663 [2024-07-25 14:04:44.783966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783971] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 14:04:44.783974] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 14:04:44.783980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 14:04:44.783984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 14:04:44.783986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 14:04:44.783989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc0c0) on tqpair=0x97a2c0 00:16:35.664 [2024-07-25 14:04:44.784003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 14:04:44.784007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 14:04:44.784010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 14:04:44.784012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbf40) on tqpair=0x97a2c0 00:16:35.664 [2024-07-25 14:04:44.784022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 14:04:44.784027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 14:04:44.784029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 14:04:44.784032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc240) on tqpair=0x97a2c0 00:16:35.664 [2024-07-25 14:04:44.784037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 14:04:44.784042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 14:04:44.784044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 14:04:44.784047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc3c0) on tqpair=0x97a2c0 00:16:35.664 ===================================================== 00:16:35.664 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.664 ===================================================== 00:16:35.664 Controller Capabilities/Features 00:16:35.664 ================================ 00:16:35.664 Vendor ID: 8086 00:16:35.664 Subsystem Vendor ID: 8086 00:16:35.664 Serial Number: SPDK00000000000001 00:16:35.664 Model Number: SPDK bdev Controller 00:16:35.664 Firmware Version: 24.09 00:16:35.664 Recommended Arb Burst: 6 00:16:35.664 IEEE OUI Identifier: e4 d2 5c 00:16:35.664 Multi-path I/O 00:16:35.664 May have multiple subsystem ports: Yes 00:16:35.664 May have multiple controllers: Yes 00:16:35.664 Associated with SR-IOV VF: No 00:16:35.664 Max Data Transfer Size: 131072 00:16:35.664 Max Number of Namespaces: 32 00:16:35.664 Max Number of I/O Queues: 127 00:16:35.664 NVMe Specification Version (VS): 1.3 00:16:35.664 NVMe Specification Version (Identify): 1.3 00:16:35.664 Maximum Queue Entries: 128 00:16:35.664 Contiguous Queues Required: Yes 00:16:35.664 Arbitration Mechanisms Supported 00:16:35.664 Weighted Round Robin: Not Supported 00:16:35.664 Vendor Specific: Not Supported 00:16:35.664 Reset Timeout: 15000 ms 00:16:35.664 
Doorbell Stride: 4 bytes 00:16:35.664 NVM Subsystem Reset: Not Supported 00:16:35.664 Command Sets Supported 00:16:35.664 NVM Command Set: Supported 00:16:35.664 Boot Partition: Not Supported 00:16:35.664 Memory Page Size Minimum: 4096 bytes 00:16:35.664 Memory Page Size Maximum: 4096 bytes 00:16:35.664 Persistent Memory Region: Not Supported 00:16:35.664 Optional Asynchronous Events Supported 00:16:35.664 Namespace Attribute Notices: Supported 00:16:35.664 Firmware Activation Notices: Not Supported 00:16:35.664 ANA Change Notices: Not Supported 00:16:35.664 PLE Aggregate Log Change Notices: Not Supported 00:16:35.664 LBA Status Info Alert Notices: Not Supported 00:16:35.664 EGE Aggregate Log Change Notices: Not Supported 00:16:35.664 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.664 Zone Descriptor Change Notices: Not Supported 00:16:35.664 Discovery Log Change Notices: Not Supported 00:16:35.664 Controller Attributes 00:16:35.664 128-bit Host Identifier: Supported 00:16:35.664 Non-Operational Permissive Mode: Not Supported 00:16:35.664 NVM Sets: Not Supported 00:16:35.664 Read Recovery Levels: Not Supported 00:16:35.664 Endurance Groups: Not Supported 00:16:35.664 Predictable Latency Mode: Not Supported 00:16:35.664 Traffic Based Keep ALive: Not Supported 00:16:35.664 Namespace Granularity: Not Supported 00:16:35.664 SQ Associations: Not Supported 00:16:35.664 UUID List: Not Supported 00:16:35.664 Multi-Domain Subsystem: Not Supported 00:16:35.664 Fixed Capacity Management: Not Supported 00:16:35.664 Variable Capacity Management: Not Supported 00:16:35.664 Delete Endurance Group: Not Supported 00:16:35.664 Delete NVM Set: Not Supported 00:16:35.664 Extended LBA Formats Supported: Not Supported 00:16:35.664 Flexible Data Placement Supported: Not Supported 00:16:35.664 00:16:35.664 Controller Memory Buffer Support 00:16:35.664 ================================ 00:16:35.664 Supported: No 00:16:35.664 00:16:35.664 Persistent Memory Region Support 00:16:35.664 ================================ 00:16:35.664 Supported: No 00:16:35.664 00:16:35.664 Admin Command Set Attributes 00:16:35.664 ============================ 00:16:35.664 Security Send/Receive: Not Supported 00:16:35.664 Format NVM: Not Supported 00:16:35.664 Firmware Activate/Download: Not Supported 00:16:35.664 Namespace Management: Not Supported 00:16:35.664 Device Self-Test: Not Supported 00:16:35.664 Directives: Not Supported 00:16:35.664 NVMe-MI: Not Supported 00:16:35.664 Virtualization Management: Not Supported 00:16:35.664 Doorbell Buffer Config: Not Supported 00:16:35.664 Get LBA Status Capability: Not Supported 00:16:35.664 Command & Feature Lockdown Capability: Not Supported 00:16:35.664 Abort Command Limit: 4 00:16:35.664 Async Event Request Limit: 4 00:16:35.664 Number of Firmware Slots: N/A 00:16:35.664 Firmware Slot 1 Read-Only: N/A 00:16:35.664 Firmware Activation Without Reset: N/A 00:16:35.664 Multiple Update Detection Support: N/A 00:16:35.664 Firmware Update Granularity: No Information Provided 00:16:35.664 Per-Namespace SMART Log: No 00:16:35.664 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.664 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:35.664 Command Effects Log Page: Supported 00:16:35.664 Get Log Page Extended Data: Supported 00:16:35.664 Telemetry Log Pages: Not Supported 00:16:35.664 Persistent Event Log Pages: Not Supported 00:16:35.664 Supported Log Pages Log Page: May Support 00:16:35.664 Commands Supported & Effects Log Page: Not Supported 00:16:35.664 Feature Identifiers & 
Effects Log Page:May Support 00:16:35.664 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.664 Data Area 4 for Telemetry Log: Not Supported 00:16:35.664 Error Log Page Entries Supported: 128 00:16:35.664 Keep Alive: Supported 00:16:35.664 Keep Alive Granularity: 10000 ms 00:16:35.664 00:16:35.664 NVM Command Set Attributes 00:16:35.664 ========================== 00:16:35.664 Submission Queue Entry Size 00:16:35.664 Max: 64 00:16:35.664 Min: 64 00:16:35.664 Completion Queue Entry Size 00:16:35.664 Max: 16 00:16:35.664 Min: 16 00:16:35.664 Number of Namespaces: 32 00:16:35.664 Compare Command: Supported 00:16:35.664 Write Uncorrectable Command: Not Supported 00:16:35.664 Dataset Management Command: Supported 00:16:35.664 Write Zeroes Command: Supported 00:16:35.664 Set Features Save Field: Not Supported 00:16:35.664 Reservations: Supported 00:16:35.664 Timestamp: Not Supported 00:16:35.664 Copy: Supported 00:16:35.664 Volatile Write Cache: Present 00:16:35.664 Atomic Write Unit (Normal): 1 00:16:35.664 Atomic Write Unit (PFail): 1 00:16:35.664 Atomic Compare & Write Unit: 1 00:16:35.664 Fused Compare & Write: Supported 00:16:35.664 Scatter-Gather List 00:16:35.664 SGL Command Set: Supported 00:16:35.664 SGL Keyed: Supported 00:16:35.664 SGL Bit Bucket Descriptor: Not Supported 00:16:35.664 SGL Metadata Pointer: Not Supported 00:16:35.664 Oversized SGL: Not Supported 00:16:35.664 SGL Metadata Address: Not Supported 00:16:35.664 SGL Offset: Supported 00:16:35.664 Transport SGL Data Block: Not Supported 00:16:35.664 Replay Protected Memory Block: Not Supported 00:16:35.664 00:16:35.664 Firmware Slot Information 00:16:35.664 ========================= 00:16:35.664 Active slot: 1 00:16:35.664 Slot 1 Firmware Revision: 24.09 00:16:35.664 00:16:35.664 00:16:35.664 Commands Supported and Effects 00:16:35.664 ============================== 00:16:35.664 Admin Commands 00:16:35.664 -------------- 00:16:35.664 Get Log Page (02h): Supported 00:16:35.664 Identify (06h): Supported 00:16:35.664 Abort (08h): Supported 00:16:35.664 Set Features (09h): Supported 00:16:35.664 Get Features (0Ah): Supported 00:16:35.665 Asynchronous Event Request (0Ch): Supported 00:16:35.665 Keep Alive (18h): Supported 00:16:35.665 I/O Commands 00:16:35.665 ------------ 00:16:35.665 Flush (00h): Supported LBA-Change 00:16:35.665 Write (01h): Supported LBA-Change 00:16:35.665 Read (02h): Supported 00:16:35.665 Compare (05h): Supported 00:16:35.665 Write Zeroes (08h): Supported LBA-Change 00:16:35.665 Dataset Management (09h): Supported LBA-Change 00:16:35.665 Copy (19h): Supported LBA-Change 00:16:35.665 00:16:35.665 Error Log 00:16:35.665 ========= 00:16:35.665 00:16:35.665 Arbitration 00:16:35.665 =========== 00:16:35.665 Arbitration Burst: 1 00:16:35.665 00:16:35.665 Power Management 00:16:35.665 ================ 00:16:35.665 Number of Power States: 1 00:16:35.665 Current Power State: Power State #0 00:16:35.665 Power State #0: 00:16:35.665 Max Power: 0.00 W 00:16:35.665 Non-Operational State: Operational 00:16:35.665 Entry Latency: Not Reported 00:16:35.665 Exit Latency: Not Reported 00:16:35.665 Relative Read Throughput: 0 00:16:35.665 Relative Read Latency: 0 00:16:35.665 Relative Write Throughput: 0 00:16:35.665 Relative Write Latency: 0 00:16:35.665 Idle Power: Not Reported 00:16:35.665 Active Power: Not Reported 00:16:35.665 Non-Operational Permissive Mode: Not Supported 00:16:35.665 00:16:35.665 Health Information 00:16:35.665 ================== 00:16:35.665 Critical Warnings: 00:16:35.665 Available Spare Space: 
OK 00:16:35.665 Temperature: OK 00:16:35.665 Device Reliability: OK 00:16:35.665 Read Only: No 00:16:35.665 Volatile Memory Backup: OK 00:16:35.665 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:35.665 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.665 Available Spare: 0% 00:16:35.665 Available Spare Threshold: 0% 00:16:35.665 Life Percentage Used:[2024-07-25 14:04:44.784140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x97a2c0) 00:16:35.665 [2024-07-25 14:04:44.784150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 14:04:44.784165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bc3c0, cid 7, qid 0 00:16:35.665 [2024-07-25 14:04:44.784204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 14:04:44.784209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 14:04:44.784212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bc3c0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784245] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:35.665 [2024-07-25 14:04:44.784252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bb940) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.665 [2024-07-25 14:04:44.784261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbac0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.665 [2024-07-25 14:04:44.784268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbc40) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.665 [2024-07-25 14:04:44.784275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.665 [2024-07-25 14:04:44.784286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.665 [2024-07-25 14:04:44.784309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 14:04:44.784325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.665 [2024-07-25 14:04:44.784362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 14:04:44.784367] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 14:04:44.784369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.665 [2024-07-25 14:04:44.784387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 14:04:44.784401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.665 [2024-07-25 14:04:44.784469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 14:04:44.784474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 14:04:44.784476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784482] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:35.665 [2024-07-25 14:04:44.784485] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:35.665 [2024-07-25 14:04:44.784492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.665 [2024-07-25 14:04:44.784502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 14:04:44.784514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.665 [2024-07-25 14:04:44.784557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 14:04:44.784562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 14:04:44.784564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.665 [2024-07-25 14:04:44.784575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 14:04:44.784580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.665 [2024-07-25 14:04:44.784585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 14:04:44.784596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.784636] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.784641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.784644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.784653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.784664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.784675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.784716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.784721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.784724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.784733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.784744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.784755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.784800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.784805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.784807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.784817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.784828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.784839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.784888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.784893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.784895] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.784905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.784915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.784926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.784976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.784980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.784983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.784993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.784998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.785003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.785014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.785050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.785055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.785057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.785068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.785078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.785089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.785134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.785139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.785141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 
[2024-07-25 14:04:44.785151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.785162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.785172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.785219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.785224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.785226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.785236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.785241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.785247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.785258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.789335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.789355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.789358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.789361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.789372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.789375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.789377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x97a2c0) 00:16:35.666 [2024-07-25 14:04:44.789384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 14:04:44.789412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bbdc0, cid 3, qid 0 00:16:35.666 [2024-07-25 14:04:44.789462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 14:04:44.789468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 14:04:44.789470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 14:04:44.789473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bbdc0) on tqpair=0x97a2c0 00:16:35.666 [2024-07-25 14:04:44.789479] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:16:35.666 0% 00:16:35.666 Data Units Read: 0 00:16:35.666 Data 
Units Written: 0 00:16:35.666 Host Read Commands: 0 00:16:35.666 Host Write Commands: 0 00:16:35.666 Controller Busy Time: 0 minutes 00:16:35.666 Power Cycles: 0 00:16:35.666 Power On Hours: 0 hours 00:16:35.666 Unsafe Shutdowns: 0 00:16:35.666 Unrecoverable Media Errors: 0 00:16:35.666 Lifetime Error Log Entries: 0 00:16:35.666 Warning Temperature Time: 0 minutes 00:16:35.666 Critical Temperature Time: 0 minutes 00:16:35.666 00:16:35.666 Number of Queues 00:16:35.666 ================ 00:16:35.666 Number of I/O Submission Queues: 127 00:16:35.666 Number of I/O Completion Queues: 127 00:16:35.666 00:16:35.666 Active Namespaces 00:16:35.666 ================= 00:16:35.666 Namespace ID:1 00:16:35.666 Error Recovery Timeout: Unlimited 00:16:35.666 Command Set Identifier: NVM (00h) 00:16:35.666 Deallocate: Supported 00:16:35.666 Deallocated/Unwritten Error: Not Supported 00:16:35.666 Deallocated Read Value: Unknown 00:16:35.666 Deallocate in Write Zeroes: Not Supported 00:16:35.666 Deallocated Guard Field: 0xFFFF 00:16:35.666 Flush: Supported 00:16:35.666 Reservation: Supported 00:16:35.666 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.666 Size (in LBAs): 131072 (0GiB) 00:16:35.666 Capacity (in LBAs): 131072 (0GiB) 00:16:35.666 Utilization (in LBAs): 131072 (0GiB) 00:16:35.667 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:35.667 EUI64: ABCDEF0123456789 00:16:35.667 UUID: 16c00bb8-b022-4986-9e37-01dea2f2b027 00:16:35.667 Thin Provisioning: Not Supported 00:16:35.667 Per-NS Atomic Units: Yes 00:16:35.667 Atomic Boundary Size (Normal): 0 00:16:35.667 Atomic Boundary Size (PFail): 0 00:16:35.667 Atomic Boundary Offset: 0 00:16:35.667 Maximum Single Source Range Length: 65535 00:16:35.667 Maximum Copy Length: 65535 00:16:35.667 Maximum Source Range Count: 1 00:16:35.667 NGUID/EUI64 Never Reused: No 00:16:35.667 Namespace Write Protected: No 00:16:35.667 Number of LBA Formats: 1 00:16:35.667 Current LBA Format: LBA Format #00 00:16:35.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.667 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.667 rmmod nvme_tcp 00:16:35.667 rmmod nvme_fabrics 00:16:35.667 rmmod nvme_keyring 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73928 ']' 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73928 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73928 ']' 00:16:35.667 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73928 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73928 00:16:35.926 killing process with pid 73928 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73928' 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73928 00:16:35.926 14:04:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73928 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.926 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:36.185 00:16:36.185 real 0m2.461s 00:16:36.185 user 0m6.558s 00:16:36.185 sys 0m0.700s 00:16:36.185 ************************************ 00:16:36.185 END TEST nvmf_identify 00:16:36.185 ************************************ 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 ************************************ 00:16:36.185 START TEST nvmf_perf 00:16:36.185 
************************************ 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:36.185 * Looking for test storage... 00:16:36.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.185 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.186 14:04:45 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.186 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.445 Cannot find device "nvmf_tgt_br" 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.445 Cannot find device "nvmf_tgt_br2" 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.445 Cannot find device "nvmf_tgt_br" 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:16:36.445 Cannot find device "nvmf_tgt_br2" 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.445 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:16:36.725 00:16:36.725 --- 10.0.0.2 ping statistics --- 00:16:36.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.725 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:16:36.725 00:16:36.725 --- 10.0.0.3 ping statistics --- 00:16:36.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.725 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:16:36.725 00:16:36.725 --- 10.0.0.1 ping statistics --- 00:16:36.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.725 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74140 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74140 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74140 ']' 00:16:36.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
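
For reference, the nvmf_veth_init sequence traced above reduces to roughly the following shell sequence; this is only a condensed sketch of the same ip/iptables calls already shown in the trace (namespace, interface and address names exactly as the test scripts use them), not an extra step run by this job:

    ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator leg (stays in root namespace)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target leg
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the three host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # host -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator address

After this, the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF) is launched inside the namespace with ip netns exec nvmf_tgt_ns_spdk, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP in the trace above.
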
00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.725 14:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.725 [2024-07-25 14:04:45.997272] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:36.725 [2024-07-25 14:04:45.997355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.982 [2024-07-25 14:04:46.122744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.982 [2024-07-25 14:04:46.226240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.982 [2024-07-25 14:04:46.226415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.982 [2024-07-25 14:04:46.226461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.982 [2024-07-25 14:04:46.226492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.982 [2024-07-25 14:04:46.226510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
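
The app_setup_trace notices just above also describe how to pull a tracepoint snapshot from this target while it runs; a minimal sketch using only what the notices themselves state (the target was started with -i 0 and -e 0xFFFF; the destination path below is an arbitrary choice, not something the job does):

    spdk_trace -s nvmf -i 0            # snapshot the tracepoints of app instance 0, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw trace file for offline analysis/debug
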
00:16:36.982 [2024-07-25 14:04:46.226677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.982 [2024-07-25 14:04:46.227666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.982 [2024-07-25 14:04:46.227743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.982 [2024-07-25 14:04:46.227746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.982 [2024-07-25 14:04:46.272812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:37.918 14:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:38.176 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:38.177 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:38.435 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:38.435 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:38.695 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:38.695 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:38.695 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:38.695 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:38.695 14:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:38.955 [2024-07-25 14:04:48.020991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.955 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.214 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:39.214 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.214 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:39.214 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:39.473 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:39.732 [2024-07-25 14:04:48.844543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.732 14:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.996 14:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:39.996 14:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:39.996 14:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:39.996 14:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:40.965 Initializing NVMe Controllers 00:16:40.965 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:40.965 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:40.965 Initialization complete. Launching workers. 00:16:40.965 ======================================================== 00:16:40.965 Latency(us) 00:16:40.965 Device Information : IOPS MiB/s Average min max 00:16:40.965 PCIE (0000:00:10.0) NSID 1 from core 0: 24908.85 97.30 1284.34 240.35 7914.52 00:16:40.965 ======================================================== 00:16:40.965 Total : 24908.85 97.30 1284.34 240.35 7914.52 00:16:40.965 00:16:40.965 14:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:42.341 Initializing NVMe Controllers 00:16:42.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:42.341 Initialization complete. Launching workers. 00:16:42.341 ======================================================== 00:16:42.341 Latency(us) 00:16:42.341 Device Information : IOPS MiB/s Average min max 00:16:42.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5148.07 20.11 194.04 71.02 4209.47 00:16:42.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8143.99 7963.22 12014.94 00:16:42.341 ======================================================== 00:16:42.341 Total : 5271.83 20.59 380.67 71.02 12014.94 00:16:42.341 00:16:42.341 14:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:43.716 Initializing NVMe Controllers 00:16:43.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:43.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:43.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:43.716 Initialization complete. Launching workers. 
00:16:43.716 ======================================================== 00:16:43.716 Latency(us) 00:16:43.716 Device Information : IOPS MiB/s Average min max 00:16:43.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10194.63 39.82 3139.04 522.52 7932.79 00:16:43.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.46 15.62 8045.03 6154.06 11996.35 00:16:43.717 ======================================================== 00:16:43.717 Total : 14194.10 55.45 4521.40 522.52 11996.35 00:16:43.717 00:16:43.717 14:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:43.717 14:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:46.243 Initializing NVMe Controllers 00:16:46.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:46.243 Controller IO queue size 128, less than required. 00:16:46.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:46.243 Controller IO queue size 128, less than required. 00:16:46.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:46.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:46.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:46.243 Initialization complete. Launching workers. 00:16:46.243 ======================================================== 00:16:46.243 Latency(us) 00:16:46.243 Device Information : IOPS MiB/s Average min max 00:16:46.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2166.58 541.65 60086.28 31343.03 89581.63 00:16:46.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 645.38 161.34 202248.29 47337.37 323200.55 00:16:46.243 ======================================================== 00:16:46.243 Total : 2811.96 702.99 92714.04 31343.03 323200.55 00:16:46.243 00:16:46.243 14:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:46.502 Initializing NVMe Controllers 00:16:46.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:46.502 Controller IO queue size 128, less than required. 00:16:46.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:46.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:46.502 Controller IO queue size 128, less than required. 00:16:46.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:46.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:46.502 WARNING: Some requested NVMe devices were skipped 00:16:46.502 No valid NVMe controllers or AIO or URING devices found 00:16:46.502 14:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:49.043 Initializing NVMe Controllers 00:16:49.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.043 Controller IO queue size 128, less than required. 00:16:49.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.043 Controller IO queue size 128, less than required. 00:16:49.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:49.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:49.043 Initialization complete. Launching workers. 00:16:49.043 00:16:49.043 ==================== 00:16:49.043 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:49.043 TCP transport: 00:16:49.043 polls: 17557 00:16:49.043 idle_polls: 12501 00:16:49.043 sock_completions: 5056 00:16:49.043 nvme_completions: 7169 00:16:49.043 submitted_requests: 10782 00:16:49.043 queued_requests: 1 00:16:49.043 00:16:49.043 ==================== 00:16:49.043 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:49.043 TCP transport: 00:16:49.043 polls: 22234 00:16:49.043 idle_polls: 17776 00:16:49.043 sock_completions: 4458 00:16:49.043 nvme_completions: 6733 00:16:49.043 submitted_requests: 10186 00:16:49.043 queued_requests: 1 00:16:49.043 ======================================================== 00:16:49.043 Latency(us) 00:16:49.043 Device Information : IOPS MiB/s Average min max 00:16:49.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1791.83 447.96 72428.33 35150.72 123716.90 00:16:49.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1682.84 420.71 77043.71 29813.60 142404.03 00:16:49.043 ======================================================== 00:16:49.043 Total : 3474.68 868.67 74663.63 29813.60 142404.03 00:16:49.043 00:16:49.043 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:49.043 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:16:49.303 rmmod nvme_tcp 00:16:49.303 rmmod nvme_fabrics 00:16:49.303 rmmod nvme_keyring 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74140 ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74140 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74140 ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74140 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74140 00:16:49.303 killing process with pid 74140 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74140' 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74140 00:16:49.303 14:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74140 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:50.238 00:16:50.238 real 0m14.026s 00:16:50.238 user 0m51.045s 00:16:50.238 sys 0m3.775s 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 ************************************ 00:16:50.238 END TEST nvmf_perf 00:16:50.238 ************************************ 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 
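
Before the nvmf_fio_host output starts, note that the whole nvmf_perf flow traced above condenses to a handful of RPCs plus spdk_nvme_perf. The sketch below restates the same calls with arguments as they appear in the trace; rpc is just a shorthand variable for the rpc.py path used throughout, and the other perf passes only vary -q/-o/-O/-t and add -HI, -c 0xf -P 4 or --transport-stat:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0 (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:00:10.0, attached earlier via gen_nvme.sh | rpc.py load_subsystem_config
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Teardown then mirrors the nvmf_identify test earlier in the log: nvmf_delete_subsystem, module unload, killprocess on the target pid, and removal of the nvmf_tgt_ns_spdk namespace.
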
************************************ 00:16:50.238 START TEST nvmf_fio_host 00:16:50.238 ************************************ 00:16:50.238 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:50.496 * Looking for test storage... 00:16:50.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:50.496 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.496 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.496 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.496 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.496 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.497 14:04:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.497 Cannot find device "nvmf_tgt_br" 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.497 Cannot find device "nvmf_tgt_br2" 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.497 Cannot find device "nvmf_tgt_br" 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:50.497 Cannot find device "nvmf_tgt_br2" 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:50.497 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.498 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.755 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:50.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:16:50.755 00:16:50.755 --- 10.0.0.2 ping statistics --- 00:16:50.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.756 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:50.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:50.756 00:16:50.756 --- 10.0.0.3 ping statistics --- 00:16:50.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.756 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:16:50.756 00:16:50.756 --- 10.0.0.1 ping statistics --- 00:16:50.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.756 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.756 14:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74550 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74550 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74550 ']' 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.756 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.013 [2024-07-25 14:05:00.077930] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:16:51.013 [2024-07-25 14:05:00.078088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.013 [2024-07-25 14:05:00.219251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.275 [2024-07-25 14:05:00.369077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.275 [2024-07-25 14:05:00.369229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.275 [2024-07-25 14:05:00.369273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.275 [2024-07-25 14:05:00.369323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.275 [2024-07-25 14:05:00.369342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.275 [2024-07-25 14:05:00.369498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.275 [2024-07-25 14:05:00.369754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.275 [2024-07-25 14:05:00.369757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.275 [2024-07-25 14:05:00.369649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.275 [2024-07-25 14:05:00.446692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:51.841 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.841 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:16:51.841 14:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.098 [2024-07-25 14:05:01.171890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.098 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:52.098 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.099 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.099 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:52.357 Malloc1 00:16:52.357 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:52.615 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:52.874 14:05:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.874 [2024-07-25 14:05:02.121021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.874 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:53.131 14:05:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:53.393 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:53.393 fio-3.35 00:16:53.393 Starting 1 thread 00:16:55.928 00:16:55.928 test: (groupid=0, jobs=1): err= 0: pid=74630: Thu Jul 25 14:05:04 2024 00:16:55.928 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.8MiB/2005msec) 00:16:55.928 slat (nsec): min=1573, max=441727, 
avg=1901.86, stdev=4015.54 00:16:55.928 clat (usec): min=3579, max=11688, avg=6635.43, stdev=617.70 00:16:55.928 lat (usec): min=3624, max=11690, avg=6637.33, stdev=617.62 00:16:55.928 clat percentiles (usec): 00:16:55.928 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:16:55.928 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6783], 00:16:55.928 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7635], 00:16:55.928 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[10683], 99.95th=[11338], 00:16:55.928 | 99.99th=[11600] 00:16:55.928 bw ( KiB/s): min=39024, max=41400, per=99.88%, avg=40180.00, stdev=1245.61, samples=4 00:16:55.928 iops : min= 9756, max=10350, avg=10045.00, stdev=311.40, samples=4 00:16:55.928 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.8MiB/2005msec); 0 zone resets 00:16:55.928 slat (nsec): min=1612, max=323826, avg=1961.29, stdev=2659.36 00:16:55.928 clat (usec): min=3441, max=10931, avg=6016.12, stdev=548.70 00:16:55.928 lat (usec): min=3460, max=10933, avg=6018.08, stdev=548.74 00:16:55.928 clat percentiles (usec): 00:16:55.928 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:16:55.928 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:16:55.928 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6915], 00:16:55.928 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9765], 00:16:55.928 | 99.99th=[10814] 00:16:55.929 bw ( KiB/s): min=39296, max=41208, per=99.99%, avg=40242.00, stdev=925.13, samples=4 00:16:55.929 iops : min= 9824, max=10302, avg=10060.50, stdev=231.28, samples=4 00:16:55.929 lat (msec) : 4=0.03%, 10=99.87%, 20=0.10% 00:16:55.929 cpu : usr=75.55%, sys=19.11%, ctx=42, majf=0, minf=6 00:16:55.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:55.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.929 issued rwts: total=20164,20174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.929 00:16:55.929 Run status group 0 (all jobs): 00:16:55.929 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.8MiB (82.6MB), run=2005-2005msec 00:16:55.929 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.8MiB (82.6MB), run=2005-2005msec 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:55.929 14:05:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:55.929 14:05:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:55.929 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:55.929 fio-3.35 00:16:55.929 Starting 1 thread 00:16:58.472 00:16:58.472 test: (groupid=0, jobs=1): err= 0: pid=74679: Thu Jul 25 14:05:07 2024 00:16:58.472 read: IOPS=8423, BW=132MiB/s (138MB/s)(264MiB/2008msec) 00:16:58.472 slat (nsec): min=2432, max=88600, avg=3201.01, stdev=1769.43 00:16:58.472 clat (usec): min=1941, max=18192, avg=8756.33, stdev=2383.33 00:16:58.472 lat (usec): min=1943, max=18195, avg=8759.53, stdev=2383.45 00:16:58.472 clat percentiles (usec): 00:16:58.472 | 1.00th=[ 3884], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 6652], 00:16:58.472 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:16:58.472 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[11863], 95.00th=[12649], 00:16:58.472 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15664], 99.95th=[15795], 00:16:58.472 | 99.99th=[17433] 00:16:58.472 bw ( KiB/s): min=58944, max=83616, per=52.20%, avg=70360.00, stdev=12727.48, samples=4 00:16:58.472 iops : min= 3684, max= 5226, avg=4397.50, stdev=795.47, samples=4 00:16:58.472 write: IOPS=5069, BW=79.2MiB/s (83.1MB/s)(144MiB/1815msec); 0 zone resets 00:16:58.472 slat (usec): min=27, max=536, avg=35.46, stdev=11.59 00:16:58.472 clat (usec): min=5114, max=20799, avg=11153.72, stdev=2252.15 00:16:58.472 lat (usec): min=5144, max=20830, avg=11189.18, stdev=2255.03 00:16:58.472 clat percentiles (usec): 00:16:58.472 | 1.00th=[ 6980], 5.00th=[ 8029], 10.00th=[ 
8586], 20.00th=[ 9372], 00:16:58.472 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:16:58.472 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14484], 95.00th=[15139], 00:16:58.472 | 99.00th=[17171], 99.50th=[17957], 99.90th=[20055], 99.95th=[20579], 00:16:58.472 | 99.99th=[20841] 00:16:58.472 bw ( KiB/s): min=61472, max=85824, per=89.89%, avg=72920.00, stdev=12189.14, samples=4 00:16:58.472 iops : min= 3842, max= 5364, avg=4557.50, stdev=761.82, samples=4 00:16:58.472 lat (msec) : 2=0.01%, 4=0.79%, 10=57.76%, 20=41.40%, 50=0.04% 00:16:58.472 cpu : usr=82.36%, sys=14.00%, ctx=4, majf=0, minf=16 00:16:58.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:58.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.472 issued rwts: total=16915,9202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.472 00:16:58.472 Run status group 0 (all jobs): 00:16:58.472 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=264MiB (277MB), run=2008-2008msec 00:16:58.472 WRITE: bw=79.2MiB/s (83.1MB/s), 79.2MiB/s-79.2MiB/s (83.1MB/s-83.1MB/s), io=144MiB (151MB), run=1815-1815msec 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.472 rmmod nvme_tcp 00:16:58.472 rmmod nvme_fabrics 00:16:58.472 rmmod nvme_keyring 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74550 ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74550 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74550 ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74550 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74550 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.472 killing process with pid 74550 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74550' 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74550 00:16:58.472 14:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74550 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:59.038 ************************************ 00:16:59.038 END TEST nvmf_fio_host 00:16:59.038 ************************************ 00:16:59.038 00:16:59.038 real 0m8.715s 00:16:59.038 user 0m34.957s 00:16:59.038 sys 0m2.322s 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.038 14:05:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.039 ************************************ 00:16:59.039 START TEST nvmf_failover 00:16:59.039 ************************************ 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:59.039 * Looking for test storage... 
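The nvmf_fio_host run that ends here drives fio through SPDK's external NVMe ioengine rather than the kernel initiator: the target side is assembled with rpc.py calls and the I/O side LD_PRELOADs the fio plugin, addressing the subsystem through the --filename string. A condensed sketch of that invocation, with every value copied from the trace above and the full rpc.py path (/home/vagrant/spdk_repo/spdk/scripts/rpc.py) shortened for readability; this is an illustration, not the test script itself:

# Target side: TCP transport, a 64 MiB malloc bdev, one subsystem with one namespace and a 10.0.0.2:4420 listener.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: fio with the SPDK NVMe plugin; trtype/traddr/trsvcid inside --filename select the NVMe/TCP path.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096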
00:16:59.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.039 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:59.297 14:05:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.297 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:59.298 Cannot find device "nvmf_tgt_br" 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.298 Cannot find device "nvmf_tgt_br2" 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link 
set nvmf_init_br down 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:59.298 Cannot find device "nvmf_tgt_br" 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:59.298 Cannot find device "nvmf_tgt_br2" 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.298 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:59.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:16:59.556 00:16:59.556 --- 10.0.0.2 ping statistics --- 00:16:59.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.556 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:59.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:59.556 00:16:59.556 --- 10.0.0.3 ping statistics --- 00:16:59.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.556 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:16:59.556 00:16:59.556 --- 10.0.0.1 ping statistics --- 00:16:59.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.556 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:59.556 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74885 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74885 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:59.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74885 ']' 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.557 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:59.557 [2024-07-25 14:05:08.833267] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:16:59.557 [2024-07-25 14:05:08.833360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.814 [2024-07-25 14:05:08.973154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.814 [2024-07-25 14:05:09.079417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:59.814 [2024-07-25 14:05:09.079551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.814 [2024-07-25 14:05:09.079595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.814 [2024-07-25 14:05:09.079624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.814 [2024-07-25 14:05:09.079651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.814 [2024-07-25 14:05:09.079994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.814 [2024-07-25 14:05:09.079901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.814 [2024-07-25 14:05:09.079996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.073 [2024-07-25 14:05:09.124551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.640 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.640 [2024-07-25 14:05:09.936560] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.898 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:00.898 Malloc0 00:17:00.898 14:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:01.157 14:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:01.420 14:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.685 [2024-07-25 14:05:10.823488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.685 14:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:01.945 [2024-07-25 14:05:11.059121] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:01.945 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:02.204 [2024-07-25 14:05:11.266973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74951 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74951 /var/tmp/bdevperf.sock 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74951 ']' 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.204 14:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:03.142 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.142 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:17:03.142 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:03.401 NVMe0n1 00:17:03.401 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:03.660 00:17:03.660 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74973 00:17:03.660 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:03.660 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:04.599 14:05:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.878 14:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:08.191 14:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:08.191 00:17:08.191 14:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:08.450 14:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 
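The failover exercise above and below boils down to this: the target exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420, 4421 and 4422, bdevperf attaches NVMe0 through 4420 and 4421 and runs 15 seconds of queued verify I/O, and the test then removes and re-adds listeners so the I/O is forced to move between paths. A condensed sketch of the driving RPCs, with arguments copied from this trace and the full rpc.py path shortened (sketch only, not the failover.sh source):

# Target: one subsystem backed by Malloc0, three listeners.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# bdevperf side (started with -q 128 -o 4096 -w verify -t 15 -f): two initial paths to the same NVMe0 bdev.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# While the I/O runs, listeners are shuffled: drop 4420, attach via 4422, drop 4421,
# then (below) re-add 4420 and finally drop 4422 before waiting for bdevperf to finish.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422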
00:17:11.791 14:05:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.791 [2024-07-25 14:05:20.746131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.791 14:05:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:12.726 14:05:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:12.726 14:05:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74973 00:17:19.349 0 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74951 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74951 ']' 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74951 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74951 00:17:19.349 killing process with pid 74951 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74951' 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74951 00:17:19.349 14:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74951 00:17:19.349 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:19.349 [2024-07-25 14:05:11.323152] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:17:19.349 [2024-07-25 14:05:11.323241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74951 ] 00:17:19.349 [2024-07-25 14:05:11.460537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.349 [2024-07-25 14:05:11.607603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.349 [2024-07-25 14:05:11.684370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:19.349 Running I/O for 15 seconds... 
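The block that follows is the bdevperf output captured in try.txt. The nvme_qpair prints in it are the failover actually happening rather than a malfunction: when the listener an attached path is using is removed, the queue pair on that path is torn down and its in-flight READ/WRITE commands complete with "ABORTED - SQ DELETION", after which the bdev_nvme layer is expected to resubmit them on a surviving path, so seeing these notices in try.txt is normal for this test. A quick way to gauge how many commands were bounced this way, assuming the captured file has not yet been cleaned up by the exit trap (hypothetical one-liner, not part of the test scripts):

grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt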
00:17:19.349 [2024-07-25 14:05:14.003403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.003918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.004605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.004682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.004761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.004870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.004949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.004983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.005914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.005975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.349 [2024-07-25 14:05:14.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.349 [2024-07-25 14:05:14.006594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.349 [2024-07-25 14:05:14.006639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.006677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.006721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.006773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.006812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.006846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.006885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.006918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.006951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.006989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.007931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 
[2024-07-25 14:05:14.008215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.350 [2024-07-25 14:05:14.008629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.008703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.008773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.008842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.008919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.008951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.008986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.350 [2024-07-25 14:05:14.009644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.350 [2024-07-25 14:05:14.009684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.009721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.009758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.009795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.009834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.009867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.009916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.009954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.010943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.010991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.011065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 
[2024-07-25 14:05:14.011320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.351 [2024-07-25 14:05:14.011666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.011740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.011845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.011920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.011970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.012011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.012092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.012174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.351 [2024-07-25 14:05:14.012246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bad70 is same with the state(5) to be set 00:17:19.351 [2024-07-25 14:05:14.012348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.351 [2024-07-25 14:05:14.012382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.351 [2024-07-25 14:05:14.012420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86688 len:8 PRP1 0x0 PRP2 0x0 00:17:19.351 [2024-07-25 14:05:14.012455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.351 [2024-07-25 14:05:14.012541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.351 [2024-07-25 14:05:14.012576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87016 len:8 PRP1 0x0 PRP2 0x0 00:17:19.351 [2024-07-25 14:05:14.012610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.351 [2024-07-25 14:05:14.012676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.351 [2024-07-25 14:05:14.012706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87024 len:8 PRP1 0x0 PRP2 0x0 00:17:19.351 [2024-07-25 14:05:14.012751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.351 [2024-07-25 14:05:14.012783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.351 [2024-07-25 14:05:14.012817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.012851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87032 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.012890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.012923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.012952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.012986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87040 len:8 PRP1 0x0 PRP2 
0x0 00:17:19.352 [2024-07-25 14:05:14.013026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87048 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87056 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87064 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87072 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87080 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.013875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.013909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87088 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.013961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.013994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87096 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87104 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87112 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87120 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87128 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87136 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.014882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.014916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87144 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.014955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.014987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87152 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87160 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87168 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87176 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87184 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.015766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.015798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.015835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.352 [2024-07-25 14:05:14.015972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.352 [2024-07-25 14:05:14.016010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87200 len:8 PRP1 0x0 PRP2 0x0 00:17:19.352 [2024-07-25 14:05:14.016048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.352 [2024-07-25 14:05:14.016153] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10bad70 was disconnected and freed. reset controller. 00:17:19.352 [2024-07-25 14:05:14.016195] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:19.352 [2024-07-25 14:05:14.016310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.352 [2024-07-25 14:05:14.016379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:14.016417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:14.016455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:14.016496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:14.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:14.016569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:14.016607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:14.016655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.353 [2024-07-25 14:05:14.016751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b570 (9): Bad file descriptor 00:17:19.353 [2024-07-25 14:05:14.019614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.353 [2024-07-25 14:05:14.049078] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
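The block above ends with qpair 0x10bad70 on 10.0.0.2:4420 being disconnected and freed, the failover trid moving to 10.0.0.2:4421, and the controller reset completing. The recorded run simply continues, but for anyone replaying it by hand, one way to confirm which path NVMe0 landed on is to query the bdevperf app over the same RPC socket. The bdev_nvme_get_controllers call below is not issued anywhere in this run; it is suggested here on the assumption that the stock SPDK rpc.py with that method is available in the same tree.

    # Hypothetical follow-up (not in the recorded run): list NVMe0's controller
    # state from the bdevperf side to verify the failover to 10.0.0.2:4421.
    # Assumes the stock SPDK rpc.py and its bdev_nvme_get_controllers method.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0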
00:17:19.353 [2024-07-25 14:05:17.519145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:17.519243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.519269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:17.519280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.519291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:17.519301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.519338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.353 [2024-07-25 14:05:17.519348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.519359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b570 is same with the state(5) to be set 00:17:19.353 [2024-07-25 14:05:17.520334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.353 [2024-07-25 14:05:17.520560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.353 [2024-07-25 14:05:17.520851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.353 [2024-07-25 14:05:17.520862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.520980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.520990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.354 [2024-07-25 14:05:17.521476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.354 [2024-07-25 14:05:17.521741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.354 [2024-07-25 14:05:17.521751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.521774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.521796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.521817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.521840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 
14:05:17.521906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.521987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.521997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.355 [2024-07-25 14:05:17.522590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:19.355 [2024-07-25 14:05:17.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.355 [2024-07-25 14:05:17.522645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.355 [2024-07-25 14:05:17.522655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.356 [2024-07-25 14:05:17.522677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.356 [2024-07-25 14:05:17.522699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.356 [2024-07-25 14:05:17.522732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.356 [2024-07-25 14:05:17.522752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.356 [2024-07-25 14:05:17.522772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.356 [2024-07-25 14:05:17.522920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bc760 is same with the state(5) to be set 00:17:19.356 [2024-07-25 14:05:17.522950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.522957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.522964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1896 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.522973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.522984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.522991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.522997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2352 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2360 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2376 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2384 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2392 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2408 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:2416 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2424 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2440 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2448 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2456 len:8 PRP1 0x0 PRP2 0x0 00:17:19.356 [2024-07-25 14:05:17.523553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.356 [2024-07-25 14:05:17.523562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.356 [2024-07-25 14:05:17.523568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.356 [2024-07-25 14:05:17.523575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:8 PRP1 0x0 PRP2 0x0 
00:17:19.357 [2024-07-25 14:05:17.523584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:17.523596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.357 [2024-07-25 14:05:17.523602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.357 [2024-07-25 14:05:17.523609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2472 len:8 PRP1 0x0 PRP2 0x0 00:17:19.357 [2024-07-25 14:05:17.523618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:17.523686] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10bc760 was disconnected and freed. reset controller. 00:17:19.357 [2024-07-25 14:05:17.523701] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:19.357 [2024-07-25 14:05:17.523712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.357 [2024-07-25 14:05:17.526909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.357 [2024-07-25 14:05:17.526949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b570 (9): Bad file descriptor 00:17:19.357 [2024-07-25 14:05:17.562663] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:19.357 [2024-07-25 14:05:21.985447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.985901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 
14:05:21.985922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.985944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.985977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.985990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.357 [2024-07-25 14:05:21.986108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986151] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.357 [2024-07-25 14:05:21.986342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.357 [2024-07-25 14:05:21.986354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986609] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.986848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.986983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.987004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.358 [2024-07-25 14:05:21.987014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.987026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.987036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.987047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.987058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.987070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.987079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.358 [2024-07-25 14:05:21.987095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.358 [2024-07-25 14:05:21.987105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 
14:05:21.987280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:19.359 [2024-07-25 14:05:21.987710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.359 [2024-07-25 14:05:21.987936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.359 [2024-07-25 14:05:21.987945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.987956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.360 [2024-07-25 14:05:21.987965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.987976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.360 [2024-07-25 14:05:21.987986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.987997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.360 [2024-07-25 14:05:21.988006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.360 [2024-07-25 14:05:21.988028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bcd50 is same with the state(5) to be set 00:17:19.360 [2024-07-25 14:05:21.988052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127248 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127704 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127712 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:19.360 [2024-07-25 14:05:21.988159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127720 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127728 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127736 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127744 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127752 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127760 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988410] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127768 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127776 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127784 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127792 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127800 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127808 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127816 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:19.360 [2024-07-25 14:05:21.988656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:19.360 [2024-07-25 14:05:21.988663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127824 len:8 PRP1 0x0 PRP2 0x0 00:17:19.360 [2024-07-25 14:05:21.988672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:21.988741] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10bcd50 was disconnected and freed. reset controller. 00:17:19.360 [2024-07-25 14:05:21.988757] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:19.360 [2024-07-25 14:05:21.988815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.360 [2024-07-25 14:05:22.004621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:22.004672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.360 [2024-07-25 14:05:22.004686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:22.004699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.360 [2024-07-25 14:05:22.004733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.360 [2024-07-25 14:05:22.004747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.361 [2024-07-25 14:05:22.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.361 [2024-07-25 14:05:22.004773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.361 [2024-07-25 14:05:22.004867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b570 (9): Bad file descriptor 00:17:19.361 [2024-07-25 14:05:22.011541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.361 [2024-07-25 14:05:22.046438] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:19.361 00:17:19.361 Latency(us) 00:17:19.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.361 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.361 Verification LBA range: start 0x0 length 0x4000 00:17:19.361 NVMe0n1 : 15.01 10553.60 41.23 249.27 0.00 11822.37 479.36 26214.40 00:17:19.361 =================================================================================================================== 00:17:19.361 Total : 10553.60 41.23 249.27 0.00 11822.37 479.36 26214.40 00:17:19.361 Received shutdown signal, test time was about 15.000000 seconds 00:17:19.361 00:17:19.361 Latency(us) 00:17:19.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.361 =================================================================================================================== 00:17:19.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75148 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75148 /var/tmp/bdevperf.sock 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75148 ']' 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
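The trace above restarts bdevperf in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock after counting three 'Resetting controller successful' messages from the first pass. A minimal sketch of that flow follows, assuming the /home/vagrant/spdk_repo/spdk layout from this run; the command names are taken from the trace itself, but the ordering and option values here are illustrative rather than a copy of failover.sh.

  # Sketch only: reproduces the sequence visible in the trace, not the test script itself.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # 1. Start bdevperf idle (-z) so bdevs can be attached later over its RPC socket.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

  # 2. Expose extra target ports for the controller to fail over to (target-side RPC socket).
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # 3. Attach the same subsystem through all three ports under one bdev name.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # 4. Detach the active path; bdev_nvme fails over to the next registered trid.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # 5. Drive I/O through bdevperf while the path change is handled.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests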
00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.361 14:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:19.928 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.928 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:17:19.928 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:20.187 [2024-07-25 14:05:29.309175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:20.187 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:20.445 [2024-07-25 14:05:29.520923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:20.445 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.704 NVMe0n1 00:17:20.704 14:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.963 00:17:20.963 14:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:21.222 00:17:21.222 14:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:21.222 14:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:21.482 14:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:21.742 14:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:25.026 14:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:25.026 14:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.026 14:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.026 14:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75225 00:17:25.026 14:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75225 00:17:25.963 0 00:17:25.963 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:25.963 [2024-07-25 14:05:28.197772] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:17:25.963 [2024-07-25 14:05:28.197848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75148 ] 00:17:25.963 [2024-07-25 14:05:28.322423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.963 [2024-07-25 14:05:28.425744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.963 [2024-07-25 14:05:28.468192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:25.963 [2024-07-25 14:05:30.774873] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:25.963 [2024-07-25 14:05:30.775005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.963 [2024-07-25 14:05:30.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.963 [2024-07-25 14:05:30.775041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.963 [2024-07-25 14:05:30.775053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.963 [2024-07-25 14:05:30.775066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.963 [2024-07-25 14:05:30.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.963 [2024-07-25 14:05:30.775089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.963 [2024-07-25 14:05:30.775102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.963 [2024-07-25 14:05:30.775114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:25.963 [2024-07-25 14:05:30.775179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:25.963 [2024-07-25 14:05:30.775206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2d570 (9): Bad file descriptor 00:17:25.963 [2024-07-25 14:05:30.783718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:25.963 Running I/O for 1 seconds... 
00:17:25.963 00:17:25.963 Latency(us) 00:17:25.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.963 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:25.963 Verification LBA range: start 0x0 length 0x4000 00:17:25.963 NVMe0n1 : 1.01 8748.47 34.17 0.00 0.00 14581.49 1974.67 12076.94 00:17:25.963 =================================================================================================================== 00:17:25.963 Total : 8748.47 34.17 0.00 0.00 14581.49 1974.67 12076.94 00:17:25.963 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.963 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:26.222 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.482 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:26.482 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:26.482 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.740 14:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:30.029 14:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:30.029 14:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75148 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75148 ']' 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75148 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75148 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.029 killing process with pid 75148 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75148' 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75148 00:17:30.029 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75148 00:17:30.288 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:30.288 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.547 14:05:39 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.547 rmmod nvme_tcp 00:17:30.547 rmmod nvme_fabrics 00:17:30.547 rmmod nvme_keyring 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74885 ']' 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74885 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74885 ']' 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74885 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74885 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:30.547 killing process with pid 74885 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74885' 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74885 00:17:30.547 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74885 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.806 14:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.806 
14:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:30.806 00:17:30.806 real 0m31.836s 00:17:30.806 user 2m2.430s 00:17:30.806 sys 0m5.316s 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:30.806 ************************************ 00:17:30.806 END TEST nvmf_failover 00:17:30.806 ************************************ 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.806 14:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.065 ************************************ 00:17:31.065 START TEST nvmf_host_discovery 00:17:31.065 ************************************ 00:17:31.065 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:31.065 * Looking for test storage... 00:17:31.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.065 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.065 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:31.065 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.065 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.066 Cannot find device "nvmf_tgt_br" 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.066 Cannot find device "nvmf_tgt_br2" 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.066 Cannot find device "nvmf_tgt_br" 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:31.066 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.325 Cannot find device "nvmf_tgt_br2" 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:17:31.325 00:17:31.325 --- 10.0.0.2 ping statistics --- 00:17:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.325 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:31.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:31.325 00:17:31.325 --- 10.0.0.3 ping statistics --- 00:17:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.325 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:17:31.325 00:17:31.325 --- 10.0.0.1 ping statistics --- 00:17:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.325 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.325 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75487 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75487 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75487 ']' 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
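The nvmf_veth_init sequence traced above builds a small virtual topology: one veth pair toward the initiator (nvmf_init_if / nvmf_init_br) and two veth pairs whose far ends (nvmf_tgt_if, nvmf_tgt_if2) live inside the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, with TCP/4420 opened on the initiator interface and reachability confirmed by the pings above. Condensed into a sketch (names and addresses taken from the trace, error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator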
00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.583 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.583 [2024-07-25 14:05:40.706080] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:17:31.583 [2024-07-25 14:05:40.706150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.583 [2024-07-25 14:05:40.843600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.841 [2024-07-25 14:05:40.941659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.841 [2024-07-25 14:05:40.941736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.841 [2024-07-25 14:05:40.941747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.841 [2024-07-25 14:05:40.941754] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.841 [2024-07-25 14:05:40.941761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.841 [2024-07-25 14:05:40.941801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.841 [2024-07-25 14:05:40.984084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 [2024-07-25 14:05:41.584318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 [2024-07-25 14:05:41.596367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 null0 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 null1 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75519 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75519 /tmp/host.sock 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75519 ']' 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.408 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.408 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 [2024-07-25 14:05:41.690516] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
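Two SPDK applications are now in play: nvmf_tgt pid 75487 running inside the namespace as the NVMe-oF target (default RPC socket), and a second nvmf_tgt, pid 75519, started on /tmp/host.sock to act as the discovery host. Assuming rpc_cmd expands to scripts/rpc.py as in SPDK's test harness, the bring-up traced above, together with the discovery start traced just below, amounts to:

  # target side, inside the network namespace (RPC on the default /var/tmp/spdk.sock)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  ./scripts/rpc.py bdev_null_create null0 1000 512      # null bdevs later exposed as namespaces
  ./scripts/rpc.py bdev_null_create null1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine

  # host side: a second nvmf_tgt used only as the discovery/initiator application
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test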
00:17:32.408 [2024-07-25 14:05:41.690593] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75519 ] 00:17:32.694 [2024-07-25 14:05:41.827195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.694 [2024-07-25 14:05:41.930410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.694 [2024-07-25 14:05:41.972211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.631 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.890 [2024-07-25 14:05:42.950181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:33.890 14:05:42 
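On the target side the test then provisions the subsystem that discovery is expected to report: it creates nqn.2016-06.io.spdk:cnode0, attaches null0 as its first namespace, and adds a data listener on 10.0.0.2:4420; the host NQN is whitelisted a few records further down, which is what finally allows the discovered path to attach as controller nvme0. A condensed sketch of those target-side RPCs (again assuming rpc_cmd is scripts/rpc.py):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # traced below, at discovery.sh@103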
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.890 14:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:33.890 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:17:33.891 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:17:34.459 [2024-07-25 14:05:43.592959] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:34.459 [2024-07-25 14:05:43.592995] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:34.459 [2024-07-25 14:05:43.593006] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:34.459 [2024-07-25 14:05:43.598978] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:34.459 [2024-07-25 14:05:43.655646] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:34.460 [2024-07-25 14:05:43.655689] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:35.029 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 [2024-07-25 14:05:44.476464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:35.290 [2024-07-25 14:05:44.476873] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:35.290 [2024-07-25 14:05:44.476902] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:17:35.290 [2024-07-25 14:05:44.482854] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.291 [2024-07-25 14:05:44.544980] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:35.291 [2024-07-25 14:05:44.545008] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.291 [2024-07-25 14:05:44.545013] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
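With the second listener on 10.0.0.2:4421 in place, the AER-driven discovery log page refresh reports a new path for nvme0, so the single controller now carries two TCP paths while the bdev list stays nvme0n1 nvme0n2. The path check traced just below reduces to one pipeline (rpc_cmd again expanded to scripts/rpc.py):

  # expected output once both listeners are visible: "4420 4421"
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs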
00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.552 [2024-07-25 14:05:44.680656] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:35.552 [2024-07-25 14:05:44.680692] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:35.552 [2024-07-25 14:05:44.681257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.552 [2024-07-25 14:05:44.681286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.552 [2024-07-25 14:05:44.681295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.552 [2024-07-25 14:05:44.681313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.552 [2024-07-25 14:05:44.681321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.552 [2024-07-25 14:05:44.681327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.552 [2024-07-25 14:05:44.681334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.552 [2024-07-25 14:05:44.681340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.552 [2024-07-25 14:05:44.681346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e5620 is same with the state(5) to be set 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:35.552 [2024-07-25 14:05:44.686642] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:35.552 [2024-07-25 14:05:44.686676] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:35.552 [2024-07-25 14:05:44.686736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5620 (9): Bad file descriptor 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.552 14:05:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:35.552 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:35.553 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.813 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.815 [2024-07-25 14:05:46.068874] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:36.815 [2024-07-25 14:05:46.068917] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:36.815 [2024-07-25 14:05:46.068933] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:36.815 [2024-07-25 14:05:46.074896] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:37.074 [2024-07-25 14:05:46.135185] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:37.074 [2024-07-25 14:05:46.135248] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.074 request: 00:17:37.074 { 00:17:37.074 "name": "nvme", 00:17:37.074 "trtype": "tcp", 00:17:37.074 "traddr": "10.0.0.2", 00:17:37.074 "adrfam": "ipv4", 00:17:37.074 "trsvcid": "8009", 00:17:37.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:37.074 "wait_for_attach": true, 00:17:37.074 "method": "bdev_nvme_start_discovery", 00:17:37.074 "req_id": 1 00:17:37.074 } 00:17:37.074 Got JSON-RPC error response 00:17:37.074 response: 00:17:37.074 { 00:17:37.074 "code": -17, 00:17:37.074 "message": "File exists" 00:17:37.074 } 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.074 request: 00:17:37.074 { 00:17:37.074 "name": "nvme_second", 00:17:37.074 "trtype": "tcp", 00:17:37.074 "traddr": "10.0.0.2", 00:17:37.074 "adrfam": "ipv4", 00:17:37.074 "trsvcid": "8009", 00:17:37.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:37.074 "wait_for_attach": true, 00:17:37.074 "method": "bdev_nvme_start_discovery", 00:17:37.074 "req_id": 1 00:17:37.074 } 00:17:37.074 Got JSON-RPC error response 00:17:37.074 response: 00:17:37.074 { 00:17:37.074 "code": -17, 00:17:37.074 "message": "File exists" 00:17:37.074 } 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:37.074 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:37.075 14:05:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:37.075 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.333 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.270 [2024-07-25 14:05:47.389669] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.270 [2024-07-25 14:05:47.389739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2284f70 with addr=10.0.0.2, port=8010 00:17:38.270 [2024-07-25 14:05:47.389759] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:38.270 [2024-07-25 14:05:47.389767] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:38.270 [2024-07-25 14:05:47.389774] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:39.207 [2024-07-25 14:05:48.387725] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:39.207 [2024-07-25 14:05:48.387793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2284f70 with addr=10.0.0.2, port=8010 00:17:39.207 [2024-07-25 14:05:48.387811] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:39.207 [2024-07-25 14:05:48.387817] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:17:39.207 [2024-07-25 14:05:48.387823] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:40.148 [2024-07-25 14:05:49.385670] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:40.148 request: 00:17:40.148 { 00:17:40.148 "name": "nvme_second", 00:17:40.148 "trtype": "tcp", 00:17:40.148 "traddr": "10.0.0.2", 00:17:40.148 "adrfam": "ipv4", 00:17:40.148 "trsvcid": "8010", 00:17:40.148 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:40.148 "wait_for_attach": false, 00:17:40.148 "attach_timeout_ms": 3000, 00:17:40.148 "method": "bdev_nvme_start_discovery", 00:17:40.148 "req_id": 1 00:17:40.148 } 00:17:40.148 Got JSON-RPC error response 00:17:40.148 response: 00:17:40.148 { 00:17:40.148 "code": -110, 00:17:40.148 "message": "Connection timed out" 00:17:40.148 } 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:40.148 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75519 00:17:40.407 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:40.407 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.408 rmmod nvme_tcp 00:17:40.408 rmmod nvme_fabrics 00:17:40.408 rmmod nvme_keyring 00:17:40.408 14:05:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75487 ']' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75487 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75487 ']' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75487 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75487 00:17:40.408 killing process with pid 75487 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75487' 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75487 00:17:40.408 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75487 00:17:40.667 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.667 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.668 00:17:40.668 real 0m9.737s 00:17:40.668 user 0m18.421s 00:17:40.668 sys 0m2.074s 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 ************************************ 00:17:40.668 END TEST nvmf_host_discovery 00:17:40.668 ************************************ 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:40.668 14:05:49 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 ************************************ 00:17:40.668 START TEST nvmf_host_multipath_status 00:17:40.668 ************************************ 00:17:40.668 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:40.928 * Looking for test storage... 00:17:40.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.928 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:40.929 Cannot find device "nvmf_tgt_br" 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.929 Cannot find device "nvmf_tgt_br2" 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:40.929 Cannot find device "nvmf_tgt_br" 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:40.929 Cannot find device "nvmf_tgt_br2" 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:40.929 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:41.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:17:41.190 00:17:41.190 --- 10.0.0.2 ping statistics --- 00:17:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.190 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:41.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:41.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:17:41.190 00:17:41.190 --- 10.0.0.3 ping statistics --- 00:17:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.190 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:41.190 00:17:41.190 --- 10.0.0.1 ping statistics --- 00:17:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.190 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=75965 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 75965 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75965 ']' 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
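Condensed for reference, the namespace and target startup performed in the trace above looks roughly like the following. Interface names, addresses, the core mask, and the binary path are taken verbatim from the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here, and the readiness loop with rpc_get_methods is a simplified stand-in for the suite's waitforlisten helper, not a copy of it.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Start the target inside the namespace, then poll its RPC socket until it answers.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for _ in $(seq 1 60); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null && break
    sleep 0.5
done

Once the socket answers, the ping checks above confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host side of the bridge before any NVMe-oF traffic is attempted.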
00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.190 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:41.449 [2024-07-25 14:05:50.513360] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:17:41.449 [2024-07-25 14:05:50.513440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.449 [2024-07-25 14:05:50.652398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.708 [2024-07-25 14:05:50.768129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.708 [2024-07-25 14:05:50.768273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.708 [2024-07-25 14:05:50.768345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.708 [2024-07-25 14:05:50.768393] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.708 [2024-07-25 14:05:50.768412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.708 [2024-07-25 14:05:50.768556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.708 [2024-07-25 14:05:50.768560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.708 [2024-07-25 14:05:50.812499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75965 00:17:42.276 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.535 [2024-07-25 14:05:51.669054] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.535 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:42.794 Malloc0 00:17:42.794 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:43.052 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.310 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.310 [2024-07-25 14:05:52.610214] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:43.568 [2024-07-25 14:05:52.833893] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76021 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76021 /var/tmp/bdevperf.sock 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76021 ']' 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
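Once nvmf_tgt is running inside the namespace, the rpc.py calls traced above provision it for a two-path test. A sketch of that sequence, with `rpc` as a shorthand introduced only for this summary; the flag comments are best-effort readings of the rpc.py options, not something the log itself states.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting enabled
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same subsystem give the host two paths to the same namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then launched separately (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90), and the controllers are attached to it in the next step of the trace.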
00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.568 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:44.955 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.955 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:17:44.955 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:44.955 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:45.213 Nvme0n1 00:17:45.213 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:45.471 Nvme0n1 00:17:45.471 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:45.471 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:47.371 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:47.371 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:47.630 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:47.896 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:48.831 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:48.831 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:48.831 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.831 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:49.089 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.089 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:49.089 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:49.089 14:05:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.348 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.348 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:49.348 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.348 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:49.607 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.607 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:49.607 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:49.607 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.866 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.866 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:49.866 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.866 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:50.125 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.125 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:50.125 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.125 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:50.383 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.383 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:50.383 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:50.642 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
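Every status block in this trace is the same two helpers from host/multipath_status.sh applied over and over: set_ANA_state pushes an ANA state to each of the two listeners, and port_status reads bdevperf's view of the paths back through bdev_nvme_get_io_paths and jq. A minimal reconstruction of what the xtrace shows; the function bodies are inferred from the traced commands, so treat them as a sketch.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    set_ANA_state() {   # $1 -> listener on port 4420, $2 -> listener on port 4421
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # port_status <trsvcid> <current|connected|accessible> <expected>
        [[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    # e.g. the non_optimized/optimized step above: only the optimized path should be "current",
    # while both paths stay connected and accessible
    set_ANA_state non_optimized optimized
    sleep 1
    port_status 4420 current false && port_status 4421 current true

check_status in the trace is simply six of these calls in a row: current, connected and accessible for port 4420, then the same three for port 4421.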
00:17:50.901 14:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:51.838 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:51.838 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:51.838 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.838 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:52.099 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:52.099 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:52.099 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.099 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:52.357 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.357 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:52.357 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.357 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.615 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:52.874 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.874 14:06:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:52.874 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.874 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:53.132 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.132 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:53.132 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:53.391 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:53.650 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:54.586 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:54.586 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:54.586 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.586 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:54.844 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.844 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:54.845 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.845 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:55.102 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:55.102 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:55.102 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:55.102 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.360 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.360 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:55.360 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.360 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:55.622 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.622 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:55.622 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.622 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:55.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:56.142 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.142 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:56.142 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:56.401 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:56.401 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.778 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:58.038 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.038 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:58.038 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.038 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.298 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.557 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.557 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:58.557 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.557 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.817 14:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.817 14:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:58.817 14:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:59.076 14:06:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:59.335 14:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:00.274 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:00.274 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:00.274 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.274 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:00.534 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.534 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:00.534 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.534 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.793 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.793 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.793 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:00.793 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.050 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.050 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:01.050 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.050 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.309 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:01.568 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:01.568 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:01.568 14:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:02.133 14:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:02.133 14:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:03.068 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:03.068 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.326 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:03.596 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.596 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:03.596 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.596 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:18:03.861 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.861 14:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:03.861 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.861 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:04.120 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.120 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:04.120 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.120 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.379 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:04.638 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:04.638 14:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:04.897 14:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:05.157 14:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:06.095 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:06.095 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:06.095 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:18:06.095 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.353 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.353 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:06.353 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.353 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:06.612 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.612 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:06.612 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.612 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:06.871 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.872 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:06.872 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.872 14:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:06.872 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.872 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:06.872 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.872 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:07.133 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.133 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:07.133 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.133 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:07.392 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.392 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:07.392 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:07.651 14:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:07.909 14:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.958 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:09.216 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.216 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:09.216 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.216 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:09.473 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.473 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:09.473 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.473 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
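The checks just above, where ports 4420 and 4421 report current=true at the same time, follow the switch to the active_active policy at multipath_status.sh line 116. For reference, this is roughly how the host side was wired up earlier in the trace; option meanings beyond what the log shows are not asserted here, and `brpc` is a shorthand introduced only for this summary.

    # bdevperf was started with: -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
    brpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_set_options -r -1
    brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # later, while bdevperf.py perform_tests is running:
    brpc bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active   # with active_active, every optimized path is used, so both report current=true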
00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.731 14:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:09.731 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.731 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:09.989 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.989 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:09.989 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:10.247 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:10.505 14:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:11.439 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:11.439 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:11.439 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.439 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:11.697 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.697 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:11.697 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.697 14:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:11.955 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.955 14:06:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:11.955 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:11.955 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.214 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.214 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:12.214 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.214 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:12.472 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.472 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:12.472 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.472 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:12.730 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.730 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:12.730 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:12.730 14:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.989 14:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.989 14:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:12.989 14:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:13.249 14:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:13.511 14:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:14.462 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:14.462 14:06:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:14.462 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.462 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:14.720 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:14.720 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:14.720 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.720 14:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:14.979 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:14.979 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:14.979 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.979 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.237 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:15.496 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.496 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:15.496 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:15.496 14:06:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76021 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76021 ']' 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76021 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.755 14:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76021 00:18:15.755 killing process with pid 76021 00:18:15.755 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:15.755 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:15.755 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76021' 00:18:15.755 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76021 00:18:15.755 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76021 00:18:16.018 Connection closed with partial response: 00:18:16.018 00:18:16.018 00:18:16.018 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76021 00:18:16.018 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:16.018 [2024-07-25 14:05:52.886935] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:18:16.018 [2024-07-25 14:05:52.887023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76021 ] 00:18:16.018 [2024-07-25 14:05:53.027945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.018 [2024-07-25 14:05:53.180846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.018 [2024-07-25 14:05:53.257074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.018 Running I/O for 90 seconds... 
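The tear-down that ends the run is the generic killprocess helper from autotest_common.sh. Reduced to the commands the trace shows for the bdevperf pid (76021 here), the flow is roughly the following sketch; the try.txt dump that follows is bdevperf's own log from the 90-second run, not new output from this point.

    pid=76021                               # $bdevperf_pid in the script
    kill -0 "$pid"                          # make sure it is still running
    ps --no-headers -o comm= "$pid"         # -> reactor_2, i.e. not a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap it; the perform_tests RPC client reports the closed connection
    cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt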
00:18:16.018 [2024-07-25 14:06:08.259930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.018 [2024-07-25 14:06:08.260417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:16.018 [2024-07-25 14:06:08.260951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.018 [2024-07-25 14:06:08.260961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.260979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.260989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 
[2024-07-25 14:06:08.261261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.019 [2024-07-25 14:06:08.261761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:16.019 [2024-07-25 14:06:08.261814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.019 [2024-07-25 14:06:08.261825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.261866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.261894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.261922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.261950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.261979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.261998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:18:16.020 [2024-07-25 14:06:08.262170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.020 [2024-07-25 14:06:08.262582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.020 [2024-07-25 14:06:08.262931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:16.020 [2024-07-25 14:06:08.262956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.262968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.021 [2024-07-25 14:06:08.263332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.021 [2024-07-25 14:06:08.263762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.021 [2024-07-25 14:06:08.263798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.021 [2024-07-25 14:06:08.263808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.263976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.263994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:08.264003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:18:16.022 [2024-07-25 14:06:08.264192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:08.264455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:08.264464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:22.573760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.022 [2024-07-25 14:06:22.573821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:22.573851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:22.573878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.022 [2024-07-25 14:06:22.573932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:16.022 [2024-07-25 14:06:22.573949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.573959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.573976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.573986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.574013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.574040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.574183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.574209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.574254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.574264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.023 [2024-07-25 14:06:22.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:16.023 [2024-07-25 14:06:22.575785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.023 [2024-07-25 14:06:22.575795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 
14:06:22.575965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.575976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.575993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 
cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.024 [2024-07-25 14:06:22.576285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:16.024 [2024-07-25 14:06:22.576410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.024 [2024-07-25 14:06:22.576421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:16.025 [2024-07-25 14:06:22.576576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.025 [2024-07-25 14:06:22.576586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:16.025 Received shutdown signal, test time was about 30.367010 seconds 00:18:16.025 00:18:16.025 Latency(us) 00:18:16.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.025 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.025 Verification LBA range: start 0x0 length 0x4000 00:18:16.025 Nvme0n1 : 30.37 9318.31 36.40 0.00 0.00 13705.68 89.88 4014809.77 00:18:16.025 =================================================================================================================== 00:18:16.025 Total : 9318.31 36.40 0.00 0.00 13705.68 89.88 4014809.77 00:18:16.025 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.283 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.284 rmmod nvme_tcp 00:18:16.284 rmmod nvme_fabrics 00:18:16.284 rmmod nvme_keyring 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 75965 ']' 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 75965 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75965 ']' 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@954 -- # kill -0 75965 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75965 00:18:16.284 killing process with pid 75965 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75965' 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75965 00:18:16.284 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75965 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.541 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.542 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.542 ************************************ 00:18:16.542 END TEST nvmf_host_multipath_status 00:18:16.542 ************************************ 00:18:16.542 00:18:16.542 real 0m35.919s 00:18:16.542 user 1m54.677s 00:18:16.542 sys 0m10.408s 00:18:16.542 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.542 14:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:16.803 14:06:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:16.803 14:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.803 14:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.803 14:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.803 ************************************ 00:18:16.803 START TEST nvmf_discovery_remove_ifc 00:18:16.803 ************************************ 00:18:16.803 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:16.803 * Looking for test storage... 
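The xtrace above is the nvmf_host_multipath_status teardown: the subsystem used by the multipath test is deleted over RPC, nvmftestfini unloads the kernel NVMe/TCP initiator modules (logged as rmmod nvme_tcp / nvme_fabrics / nvme_keyring), the long-running nvmf target process (pid 75965) is stopped, and the initiator address is flushed. Stripped of the tracing, the sequence amounts to roughly the following sketch; every path and pid is copied from the trace, and this is a condensed reading of the script rather than its literal source:

    # Remove the subsystem created for the multipath test and its scratch file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

    # nvmftestfini: flush buffers, unload the initiator modules, stop the target.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 75965
    wait 75965

    # nvmf_tcp_fini: clear the initiator-side address before the next test reuses it.
    ip -4 addr flush nvmf_init_if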
00:18:16.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:16.803 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:16.804 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:17.138 Cannot find device "nvmf_tgt_br" 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.138 Cannot find device "nvmf_tgt_br2" 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:17.138 Cannot find device "nvmf_tgt_br" 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:17.138 Cannot find device "nvmf_tgt_br2" 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.138 14:06:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:17.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:18:17.138 00:18:17.138 --- 10.0.0.2 ping statistics --- 00:18:17.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.138 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:17.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:18:17.138 00:18:17.138 --- 10.0.0.3 ping statistics --- 00:18:17.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.138 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:17.138 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:17.138 00:18:17.138 --- 10.0.0.1 ping statistics --- 00:18:17.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.138 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.139 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.397 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:17.397 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.397 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76768 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76768 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76768 ']' 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.398 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.398 [2024-07-25 14:06:26.530020] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
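At this point nvmf_veth_init has finished wiring up the isolated test network for the new test (the three pings above confirm initiator-to-target and target-to-initiator reachability) and nvmfappstart is launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. Condensed from the ip/iptables commands in the trace, the topology being built is roughly the following; this is a sketch of the setup, not the literal nvmf_veth_init source, and the trailing "link up"/iptables calls are summarized in a comment:

    # The target lives in its own network namespace; each leg is a veth pair whose
    # "br" end is enslaved to a common bridge on the host side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target leg, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # ...plus "ip link set ... up" on every device, an iptables rule accepting
    # TCP/4420 on nvmf_init_if, and a FORWARD rule allowing traffic across nvmf_br.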
00:18:17.398 [2024-07-25 14:06:26.530263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.398 [2024-07-25 14:06:26.669181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.656 [2024-07-25 14:06:26.763760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.656 [2024-07-25 14:06:26.763911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.656 [2024-07-25 14:06:26.763961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.656 [2024-07-25 14:06:26.763987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.656 [2024-07-25 14:06:26.764003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.656 [2024-07-25 14:06:26.764040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.656 [2024-07-25 14:06:26.806286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.224 [2024-07-25 14:06:27.441005] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.224 [2024-07-25 14:06:27.449111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:18.224 null0 00:18:18.224 [2024-07-25 14:06:27.480973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76803 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76803 /tmp/host.sock 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76803 ']' 00:18:18.224 14:06:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:18.224 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.224 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.483 [2024-07-25 14:06:27.556861] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:18:18.483 [2024-07-25 14:06:27.557014] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76803 ] 00:18:18.483 [2024-07-25 14:06:27.693094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.740 [2024-07-25 14:06:27.796531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 [2024-07-25 14:06:28.509582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.307 14:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
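The target instance (RPC socket /var/tmp/spdk.sock) is now listening for discovery on 10.0.0.2:8009 and for I/O on 10.0.0.2:4420 with a null0 namespace, and a second SPDK app has been started to play the host role, driven over /tmp/host.sock. rpc_cmd in the trace is the test suite's thin wrapper around scripts/rpc.py, so the host-side bring-up boils down to roughly the sketch below; all arguments are copied from the trace, and the HOST_SOCK/RPC shell variables are only added here for readability:

    HOST_SOCK=/tmp/host.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Second SPDK instance acting as the NVMe-oF host, with bdev_nvme debug logs on.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
    # (the test waits for the RPC socket with waitforlisten before continuing)

    $RPC -s "$HOST_SOCK" bdev_nvme_set_options -e 1
    $RPC -s "$HOST_SOCK" framework_start_init

    # Attach through the discovery service on 10.0.0.2:8009 and block until the
    # discovered subsystem's controller is attached; the short loss/reconnect
    # timeouts are what the interface-removal part of the test exercises.
    $RPC -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach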
00:18:20.681 [2024-07-25 14:06:29.559679] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:20.681 [2024-07-25 14:06:29.559714] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:20.681 [2024-07-25 14:06:29.559730] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:20.681 [2024-07-25 14:06:29.565703] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:20.681 [2024-07-25 14:06:29.622261] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:20.681 [2024-07-25 14:06:29.622343] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:20.681 [2024-07-25 14:06:29.622366] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:20.682 [2024-07-25 14:06:29.622382] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:20.682 [2024-07-25 14:06:29.622405] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:20.682 [2024-07-25 14:06:29.628132] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cc7ef0 was disconnected and freed. delete nvme_qpair. 
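The discovery controller has attached, the NVM subsystem nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 has been connected, and its namespace now appears on the host as bdev nvme0n1. The trace that follows is the test's wait_for_bdev/get_bdev_list polling; once nvme0n1 is confirmed, the test deletes 10.0.0.2 from nvmf_tgt_if and takes the link down to simulate losing the path, then polls again until the bdev list is empty. Reconstructed from the rpc_cmd/jq/sleep calls in the trace (a sketch, not the literal helpers), the polling is roughly:

    # List the bdev names currently known to the host app.
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # wait_for_bdev: re-check once a second until the list matches the expected
    # value ("nvme0n1" here, and later "" once the interface has been pulled).
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done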
00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:20.682 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:21.621 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:21.621 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:21.622 14:06:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:21.622 14:06:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:22.553 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:22.811 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.811 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:22.811 14:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:23.749 14:06:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:24.683 14:06:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:24.683 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.942 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:24.942 14:06:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.879 14:06:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.879 14:06:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:25.879 14:06:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:25.879 [2024-07-25 14:06:35.049839] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:25.879 [2024-07-25 14:06:35.049915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.879 [2024-07-25 14:06:35.049926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.879 [2024-07-25 14:06:35.049934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.879 [2024-07-25 14:06:35.049940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.879 [2024-07-25 14:06:35.049946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.879 [2024-07-25 14:06:35.049952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.879 [2024-07-25 14:06:35.049957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.879 [2024-07-25 14:06:35.049962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.880 [2024-07-25 14:06:35.049969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.880 [2024-07-25 14:06:35.049975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.880 [2024-07-25 14:06:35.049980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dac0 is same with the state(5) to be set 00:18:25.880 [2024-07-25 14:06:35.059813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dac0 (9): Bad file descriptor 00:18:25.880 [2024-07-25 14:06:35.069818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:26.815 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.815 [2024-07-25 14:06:36.102388] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:26.815 [2024-07-25 14:06:36.102714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dac0 with addr=10.0.0.2, port=4420 00:18:26.815 [2024-07-25 14:06:36.102910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dac0 is same with the state(5) to be set 00:18:26.815 [2024-07-25 14:06:36.103129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dac0 (9): Bad file descriptor 00:18:26.815 [2024-07-25 14:06:36.104387] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:26.815 [2024-07-25 14:06:36.104476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:26.815 [2024-07-25 14:06:36.104500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:26.815 [2024-07-25 14:06:36.104523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:26.815 [2024-07-25 14:06:36.104597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:26.815 [2024-07-25 14:06:36.104625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:27.074 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.074 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:27.074 14:06:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.011 [2024-07-25 14:06:37.102764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:28.011 [2024-07-25 14:06:37.102826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.011 [2024-07-25 14:06:37.102832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.011 [2024-07-25 14:06:37.102840] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:28.011 [2024-07-25 14:06:37.102859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.011 [2024-07-25 14:06:37.102884] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:28.011 [2024-07-25 14:06:37.102932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.011 [2024-07-25 14:06:37.102942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.011 [2024-07-25 14:06:37.102952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.011 [2024-07-25 14:06:37.102958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.011 [2024-07-25 14:06:37.102966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.011 [2024-07-25 14:06:37.102972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.011 [2024-07-25 14:06:37.102979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.011 [2024-07-25 14:06:37.102984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.011 [2024-07-25 14:06:37.102990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.011 [2024-07-25 14:06:37.102996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.011 [2024-07-25 14:06:37.103002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
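The Discovery[10.0.0.2:8009] messages in this stretch come from the bdev_nvme discovery poller in the host app: once the data path stays unreachable it removes the nqn.2016-06.io.spdk:cnode0 entry, which is why the bdev list comes back empty in the next check below. The discovery connection itself is established earlier in the test, outside this excerpt; a hedged sketch of how such a connection is typically started against the host RPC socket, assuming SPDK's stock rpc.py and its bdev_nvme_start_discovery command (the exact invocation and flags used by this script are not visible in this log):

    # Assumed invocation: attach a discovery controller in the host app so a
    # bdev is created for every NVM subsystem reported by 10.0.0.2:8009.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4

Port 8009 is the discovery service; the data path itself sits at 10.0.0.2:4420, which matches the removal and attach messages above and below.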
00:18:28.011 [2024-07-25 14:06:37.103543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31860 (9): Bad file descriptor 00:18:28.011 [2024-07-25 14:06:37.104547] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:28.011 [2024-07-25 14:06:37.104566] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.011 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:28.012 14:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.389 14:06:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:29.389 14:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.956 [2024-07-25 14:06:39.104589] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:29.956 [2024-07-25 14:06:39.104627] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:29.956 [2024-07-25 14:06:39.104642] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:29.956 [2024-07-25 14:06:39.110610] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:29.956 [2024-07-25 14:06:39.166542] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:29.956 [2024-07-25 14:06:39.166687] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:29.957 [2024-07-25 14:06:39.166723] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:29.957 [2024-07-25 14:06:39.166757] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:29.957 [2024-07-25 14:06:39.166786] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:29.957 [2024-07-25 14:06:39.173401] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ca5460 was disconnected and freed. delete nvme_qpair. 
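Recovery side of the test, as traced at @82 through @86: the address is put back on the target interface inside its network namespace, the link is brought up, and the script waits for the rediscovered namespace to reappear as a bdev. Because a fresh controller instance is attached, the new bdev is nvme1n1 rather than nvme0n1. Condensed from the trace (addresses and interface names are the ones shown above; surrounding trap/cleanup logic is omitted):

    # Restore the target's data-path address and wait for the discovery
    # poller to re-attach and expose the namespace again.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1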
00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76803 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76803 ']' 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76803 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76803 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76803' 00:18:30.215 killing process with pid 76803 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76803 00:18:30.215 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76803 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.475 rmmod nvme_tcp 00:18:30.475 rmmod nvme_fabrics 00:18:30.475 rmmod nvme_keyring 00:18:30.475 14:06:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76768 ']' 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76768 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76768 ']' 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76768 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76768 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76768' 00:18:30.475 killing process with pid 76768 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76768 00:18:30.475 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76768 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:30.734 00:18:30.734 real 0m14.093s 00:18:30.734 user 0m24.339s 00:18:30.734 sys 0m2.369s 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:30.734 ************************************ 00:18:30.734 END TEST nvmf_discovery_remove_ifc 00:18:30.734 ************************************ 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.999 ************************************ 00:18:30.999 START TEST nvmf_identify_kernel_target 00:18:30.999 ************************************ 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:30.999 * Looking for test storage... 00:18:30.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.999 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.000 
14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.000 Cannot find device "nvmf_tgt_br" 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.000 Cannot find device "nvmf_tgt_br2" 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.000 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.273 Cannot find device "nvmf_tgt_br" 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.273 Cannot find device "nvmf_tgt_br2" 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.273 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:31.274 00:18:31.274 --- 10.0.0.2 ping statistics --- 00:18:31.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.274 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:18:31.274 00:18:31.274 --- 10.0.0.3 ping statistics --- 00:18:31.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.274 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:31.274 00:18:31.274 --- 10.0.0.1 ping statistics --- 00:18:31.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.274 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:31.274 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:31.533 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:31.533 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:31.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:31.792 Waiting for block devices as requested 00:18:32.051 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:32.051 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:32.051 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.051 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:32.052 No valid GPT data, bailing 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:32.052 14:06:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:32.052 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:32.311 No valid GPT data, bailing 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:32.311 No valid GPT data, bailing 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:32.311 No valid GPT data, bailing 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:32.311 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -a 10.0.0.1 -t tcp -s 4420 00:18:32.571 00:18:32.571 Discovery Log Number of Records 2, Generation counter 2 00:18:32.571 =====Discovery Log Entry 0====== 00:18:32.571 trtype: tcp 00:18:32.571 adrfam: ipv4 00:18:32.571 subtype: current discovery subsystem 00:18:32.571 treq: not specified, sq flow control disable supported 00:18:32.571 portid: 1 00:18:32.571 trsvcid: 4420 00:18:32.571 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:32.571 traddr: 10.0.0.1 00:18:32.571 eflags: none 00:18:32.571 sectype: none 00:18:32.571 =====Discovery Log Entry 1====== 00:18:32.571 trtype: tcp 00:18:32.571 adrfam: ipv4 00:18:32.571 subtype: nvme subsystem 00:18:32.571 treq: not 
specified, sq flow control disable supported 00:18:32.571 portid: 1 00:18:32.571 trsvcid: 4420 00:18:32.571 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:32.571 traddr: 10.0.0.1 00:18:32.571 eflags: none 00:18:32.571 sectype: none 00:18:32.571 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:32.571 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:32.571 ===================================================== 00:18:32.571 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:32.571 ===================================================== 00:18:32.571 Controller Capabilities/Features 00:18:32.571 ================================ 00:18:32.571 Vendor ID: 0000 00:18:32.571 Subsystem Vendor ID: 0000 00:18:32.571 Serial Number: ae1210e03bbbc997aff2 00:18:32.571 Model Number: Linux 00:18:32.571 Firmware Version: 6.7.0-68 00:18:32.571 Recommended Arb Burst: 0 00:18:32.571 IEEE OUI Identifier: 00 00 00 00:18:32.571 Multi-path I/O 00:18:32.571 May have multiple subsystem ports: No 00:18:32.571 May have multiple controllers: No 00:18:32.571 Associated with SR-IOV VF: No 00:18:32.571 Max Data Transfer Size: Unlimited 00:18:32.571 Max Number of Namespaces: 0 00:18:32.571 Max Number of I/O Queues: 1024 00:18:32.571 NVMe Specification Version (VS): 1.3 00:18:32.571 NVMe Specification Version (Identify): 1.3 00:18:32.571 Maximum Queue Entries: 1024 00:18:32.571 Contiguous Queues Required: No 00:18:32.571 Arbitration Mechanisms Supported 00:18:32.571 Weighted Round Robin: Not Supported 00:18:32.571 Vendor Specific: Not Supported 00:18:32.571 Reset Timeout: 7500 ms 00:18:32.571 Doorbell Stride: 4 bytes 00:18:32.571 NVM Subsystem Reset: Not Supported 00:18:32.571 Command Sets Supported 00:18:32.571 NVM Command Set: Supported 00:18:32.571 Boot Partition: Not Supported 00:18:32.571 Memory Page Size Minimum: 4096 bytes 00:18:32.571 Memory Page Size Maximum: 4096 bytes 00:18:32.571 Persistent Memory Region: Not Supported 00:18:32.571 Optional Asynchronous Events Supported 00:18:32.571 Namespace Attribute Notices: Not Supported 00:18:32.571 Firmware Activation Notices: Not Supported 00:18:32.571 ANA Change Notices: Not Supported 00:18:32.571 PLE Aggregate Log Change Notices: Not Supported 00:18:32.571 LBA Status Info Alert Notices: Not Supported 00:18:32.571 EGE Aggregate Log Change Notices: Not Supported 00:18:32.571 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.571 Zone Descriptor Change Notices: Not Supported 00:18:32.571 Discovery Log Change Notices: Supported 00:18:32.571 Controller Attributes 00:18:32.571 128-bit Host Identifier: Not Supported 00:18:32.571 Non-Operational Permissive Mode: Not Supported 00:18:32.571 NVM Sets: Not Supported 00:18:32.571 Read Recovery Levels: Not Supported 00:18:32.571 Endurance Groups: Not Supported 00:18:32.571 Predictable Latency Mode: Not Supported 00:18:32.571 Traffic Based Keep ALive: Not Supported 00:18:32.571 Namespace Granularity: Not Supported 00:18:32.571 SQ Associations: Not Supported 00:18:32.571 UUID List: Not Supported 00:18:32.571 Multi-Domain Subsystem: Not Supported 00:18:32.571 Fixed Capacity Management: Not Supported 00:18:32.571 Variable Capacity Management: Not Supported 00:18:32.571 Delete Endurance Group: Not Supported 00:18:32.571 Delete NVM Set: Not Supported 00:18:32.571 Extended LBA Formats Supported: Not Supported 00:18:32.571 Flexible Data 
Placement Supported: Not Supported 00:18:32.571 00:18:32.571 Controller Memory Buffer Support 00:18:32.571 ================================ 00:18:32.571 Supported: No 00:18:32.571 00:18:32.571 Persistent Memory Region Support 00:18:32.571 ================================ 00:18:32.571 Supported: No 00:18:32.571 00:18:32.571 Admin Command Set Attributes 00:18:32.571 ============================ 00:18:32.571 Security Send/Receive: Not Supported 00:18:32.571 Format NVM: Not Supported 00:18:32.571 Firmware Activate/Download: Not Supported 00:18:32.571 Namespace Management: Not Supported 00:18:32.571 Device Self-Test: Not Supported 00:18:32.571 Directives: Not Supported 00:18:32.571 NVMe-MI: Not Supported 00:18:32.571 Virtualization Management: Not Supported 00:18:32.571 Doorbell Buffer Config: Not Supported 00:18:32.571 Get LBA Status Capability: Not Supported 00:18:32.571 Command & Feature Lockdown Capability: Not Supported 00:18:32.571 Abort Command Limit: 1 00:18:32.571 Async Event Request Limit: 1 00:18:32.571 Number of Firmware Slots: N/A 00:18:32.571 Firmware Slot 1 Read-Only: N/A 00:18:32.571 Firmware Activation Without Reset: N/A 00:18:32.571 Multiple Update Detection Support: N/A 00:18:32.571 Firmware Update Granularity: No Information Provided 00:18:32.571 Per-Namespace SMART Log: No 00:18:32.571 Asymmetric Namespace Access Log Page: Not Supported 00:18:32.571 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:32.571 Command Effects Log Page: Not Supported 00:18:32.571 Get Log Page Extended Data: Supported 00:18:32.571 Telemetry Log Pages: Not Supported 00:18:32.571 Persistent Event Log Pages: Not Supported 00:18:32.571 Supported Log Pages Log Page: May Support 00:18:32.571 Commands Supported & Effects Log Page: Not Supported 00:18:32.571 Feature Identifiers & Effects Log Page:May Support 00:18:32.571 NVMe-MI Commands & Effects Log Page: May Support 00:18:32.571 Data Area 4 for Telemetry Log: Not Supported 00:18:32.571 Error Log Page Entries Supported: 1 00:18:32.571 Keep Alive: Not Supported 00:18:32.571 00:18:32.571 NVM Command Set Attributes 00:18:32.571 ========================== 00:18:32.571 Submission Queue Entry Size 00:18:32.571 Max: 1 00:18:32.571 Min: 1 00:18:32.571 Completion Queue Entry Size 00:18:32.571 Max: 1 00:18:32.571 Min: 1 00:18:32.571 Number of Namespaces: 0 00:18:32.571 Compare Command: Not Supported 00:18:32.571 Write Uncorrectable Command: Not Supported 00:18:32.571 Dataset Management Command: Not Supported 00:18:32.571 Write Zeroes Command: Not Supported 00:18:32.571 Set Features Save Field: Not Supported 00:18:32.571 Reservations: Not Supported 00:18:32.571 Timestamp: Not Supported 00:18:32.571 Copy: Not Supported 00:18:32.571 Volatile Write Cache: Not Present 00:18:32.571 Atomic Write Unit (Normal): 1 00:18:32.571 Atomic Write Unit (PFail): 1 00:18:32.571 Atomic Compare & Write Unit: 1 00:18:32.571 Fused Compare & Write: Not Supported 00:18:32.571 Scatter-Gather List 00:18:32.571 SGL Command Set: Supported 00:18:32.571 SGL Keyed: Not Supported 00:18:32.571 SGL Bit Bucket Descriptor: Not Supported 00:18:32.571 SGL Metadata Pointer: Not Supported 00:18:32.571 Oversized SGL: Not Supported 00:18:32.571 SGL Metadata Address: Not Supported 00:18:32.571 SGL Offset: Supported 00:18:32.571 Transport SGL Data Block: Not Supported 00:18:32.572 Replay Protected Memory Block: Not Supported 00:18:32.572 00:18:32.572 Firmware Slot Information 00:18:32.572 ========================= 00:18:32.572 Active slot: 0 00:18:32.572 00:18:32.572 00:18:32.572 Error Log 
00:18:32.572 ========= 00:18:32.572 00:18:32.572 Active Namespaces 00:18:32.572 ================= 00:18:32.572 Discovery Log Page 00:18:32.572 ================== 00:18:32.572 Generation Counter: 2 00:18:32.572 Number of Records: 2 00:18:32.572 Record Format: 0 00:18:32.572 00:18:32.572 Discovery Log Entry 0 00:18:32.572 ---------------------- 00:18:32.572 Transport Type: 3 (TCP) 00:18:32.572 Address Family: 1 (IPv4) 00:18:32.572 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:32.572 Entry Flags: 00:18:32.572 Duplicate Returned Information: 0 00:18:32.572 Explicit Persistent Connection Support for Discovery: 0 00:18:32.572 Transport Requirements: 00:18:32.572 Secure Channel: Not Specified 00:18:32.572 Port ID: 1 (0x0001) 00:18:32.572 Controller ID: 65535 (0xffff) 00:18:32.572 Admin Max SQ Size: 32 00:18:32.572 Transport Service Identifier: 4420 00:18:32.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:32.572 Transport Address: 10.0.0.1 00:18:32.572 Discovery Log Entry 1 00:18:32.572 ---------------------- 00:18:32.572 Transport Type: 3 (TCP) 00:18:32.572 Address Family: 1 (IPv4) 00:18:32.572 Subsystem Type: 2 (NVM Subsystem) 00:18:32.572 Entry Flags: 00:18:32.572 Duplicate Returned Information: 0 00:18:32.572 Explicit Persistent Connection Support for Discovery: 0 00:18:32.572 Transport Requirements: 00:18:32.572 Secure Channel: Not Specified 00:18:32.572 Port ID: 1 (0x0001) 00:18:32.572 Controller ID: 65535 (0xffff) 00:18:32.572 Admin Max SQ Size: 32 00:18:32.572 Transport Service Identifier: 4420 00:18:32.572 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:32.572 Transport Address: 10.0.0.1 00:18:32.572 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:32.830 get_feature(0x01) failed 00:18:32.831 get_feature(0x02) failed 00:18:32.831 get_feature(0x04) failed 00:18:32.831 ===================================================== 00:18:32.831 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:32.831 ===================================================== 00:18:32.831 Controller Capabilities/Features 00:18:32.831 ================================ 00:18:32.831 Vendor ID: 0000 00:18:32.831 Subsystem Vendor ID: 0000 00:18:32.831 Serial Number: 80082897fa1a626b6837 00:18:32.831 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:32.831 Firmware Version: 6.7.0-68 00:18:32.831 Recommended Arb Burst: 6 00:18:32.831 IEEE OUI Identifier: 00 00 00 00:18:32.831 Multi-path I/O 00:18:32.831 May have multiple subsystem ports: Yes 00:18:32.831 May have multiple controllers: Yes 00:18:32.831 Associated with SR-IOV VF: No 00:18:32.831 Max Data Transfer Size: Unlimited 00:18:32.831 Max Number of Namespaces: 1024 00:18:32.831 Max Number of I/O Queues: 128 00:18:32.831 NVMe Specification Version (VS): 1.3 00:18:32.831 NVMe Specification Version (Identify): 1.3 00:18:32.831 Maximum Queue Entries: 1024 00:18:32.831 Contiguous Queues Required: No 00:18:32.831 Arbitration Mechanisms Supported 00:18:32.831 Weighted Round Robin: Not Supported 00:18:32.831 Vendor Specific: Not Supported 00:18:32.831 Reset Timeout: 7500 ms 00:18:32.831 Doorbell Stride: 4 bytes 00:18:32.831 NVM Subsystem Reset: Not Supported 00:18:32.831 Command Sets Supported 00:18:32.831 NVM Command Set: Supported 00:18:32.831 Boot Partition: Not Supported 00:18:32.831 Memory 
Page Size Minimum: 4096 bytes 00:18:32.831 Memory Page Size Maximum: 4096 bytes 00:18:32.831 Persistent Memory Region: Not Supported 00:18:32.831 Optional Asynchronous Events Supported 00:18:32.831 Namespace Attribute Notices: Supported 00:18:32.831 Firmware Activation Notices: Not Supported 00:18:32.831 ANA Change Notices: Supported 00:18:32.831 PLE Aggregate Log Change Notices: Not Supported 00:18:32.831 LBA Status Info Alert Notices: Not Supported 00:18:32.831 EGE Aggregate Log Change Notices: Not Supported 00:18:32.831 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.831 Zone Descriptor Change Notices: Not Supported 00:18:32.831 Discovery Log Change Notices: Not Supported 00:18:32.831 Controller Attributes 00:18:32.831 128-bit Host Identifier: Supported 00:18:32.831 Non-Operational Permissive Mode: Not Supported 00:18:32.831 NVM Sets: Not Supported 00:18:32.831 Read Recovery Levels: Not Supported 00:18:32.831 Endurance Groups: Not Supported 00:18:32.831 Predictable Latency Mode: Not Supported 00:18:32.831 Traffic Based Keep ALive: Supported 00:18:32.831 Namespace Granularity: Not Supported 00:18:32.831 SQ Associations: Not Supported 00:18:32.831 UUID List: Not Supported 00:18:32.831 Multi-Domain Subsystem: Not Supported 00:18:32.831 Fixed Capacity Management: Not Supported 00:18:32.831 Variable Capacity Management: Not Supported 00:18:32.831 Delete Endurance Group: Not Supported 00:18:32.831 Delete NVM Set: Not Supported 00:18:32.831 Extended LBA Formats Supported: Not Supported 00:18:32.831 Flexible Data Placement Supported: Not Supported 00:18:32.831 00:18:32.831 Controller Memory Buffer Support 00:18:32.831 ================================ 00:18:32.831 Supported: No 00:18:32.831 00:18:32.831 Persistent Memory Region Support 00:18:32.831 ================================ 00:18:32.831 Supported: No 00:18:32.831 00:18:32.831 Admin Command Set Attributes 00:18:32.831 ============================ 00:18:32.831 Security Send/Receive: Not Supported 00:18:32.831 Format NVM: Not Supported 00:18:32.831 Firmware Activate/Download: Not Supported 00:18:32.831 Namespace Management: Not Supported 00:18:32.831 Device Self-Test: Not Supported 00:18:32.831 Directives: Not Supported 00:18:32.831 NVMe-MI: Not Supported 00:18:32.831 Virtualization Management: Not Supported 00:18:32.831 Doorbell Buffer Config: Not Supported 00:18:32.831 Get LBA Status Capability: Not Supported 00:18:32.831 Command & Feature Lockdown Capability: Not Supported 00:18:32.831 Abort Command Limit: 4 00:18:32.831 Async Event Request Limit: 4 00:18:32.831 Number of Firmware Slots: N/A 00:18:32.831 Firmware Slot 1 Read-Only: N/A 00:18:32.831 Firmware Activation Without Reset: N/A 00:18:32.831 Multiple Update Detection Support: N/A 00:18:32.831 Firmware Update Granularity: No Information Provided 00:18:32.831 Per-Namespace SMART Log: Yes 00:18:32.831 Asymmetric Namespace Access Log Page: Supported 00:18:32.831 ANA Transition Time : 10 sec 00:18:32.831 00:18:32.831 Asymmetric Namespace Access Capabilities 00:18:32.831 ANA Optimized State : Supported 00:18:32.831 ANA Non-Optimized State : Supported 00:18:32.831 ANA Inaccessible State : Supported 00:18:32.831 ANA Persistent Loss State : Supported 00:18:32.831 ANA Change State : Supported 00:18:32.831 ANAGRPID is not changed : No 00:18:32.831 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:32.831 00:18:32.831 ANA Group Identifier Maximum : 128 00:18:32.831 Number of ANA Group Identifiers : 128 00:18:32.831 Max Number of Allowed Namespaces : 1024 00:18:32.831 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:32.831 Command Effects Log Page: Supported 00:18:32.831 Get Log Page Extended Data: Supported 00:18:32.831 Telemetry Log Pages: Not Supported 00:18:32.831 Persistent Event Log Pages: Not Supported 00:18:32.831 Supported Log Pages Log Page: May Support 00:18:32.831 Commands Supported & Effects Log Page: Not Supported 00:18:32.831 Feature Identifiers & Effects Log Page:May Support 00:18:32.831 NVMe-MI Commands & Effects Log Page: May Support 00:18:32.831 Data Area 4 for Telemetry Log: Not Supported 00:18:32.831 Error Log Page Entries Supported: 128 00:18:32.831 Keep Alive: Supported 00:18:32.831 Keep Alive Granularity: 1000 ms 00:18:32.831 00:18:32.831 NVM Command Set Attributes 00:18:32.831 ========================== 00:18:32.831 Submission Queue Entry Size 00:18:32.831 Max: 64 00:18:32.831 Min: 64 00:18:32.831 Completion Queue Entry Size 00:18:32.831 Max: 16 00:18:32.831 Min: 16 00:18:32.831 Number of Namespaces: 1024 00:18:32.831 Compare Command: Not Supported 00:18:32.831 Write Uncorrectable Command: Not Supported 00:18:32.831 Dataset Management Command: Supported 00:18:32.831 Write Zeroes Command: Supported 00:18:32.831 Set Features Save Field: Not Supported 00:18:32.831 Reservations: Not Supported 00:18:32.831 Timestamp: Not Supported 00:18:32.831 Copy: Not Supported 00:18:32.831 Volatile Write Cache: Present 00:18:32.831 Atomic Write Unit (Normal): 1 00:18:32.831 Atomic Write Unit (PFail): 1 00:18:32.831 Atomic Compare & Write Unit: 1 00:18:32.831 Fused Compare & Write: Not Supported 00:18:32.831 Scatter-Gather List 00:18:32.831 SGL Command Set: Supported 00:18:32.831 SGL Keyed: Not Supported 00:18:32.831 SGL Bit Bucket Descriptor: Not Supported 00:18:32.831 SGL Metadata Pointer: Not Supported 00:18:32.831 Oversized SGL: Not Supported 00:18:32.831 SGL Metadata Address: Not Supported 00:18:32.831 SGL Offset: Supported 00:18:32.831 Transport SGL Data Block: Not Supported 00:18:32.831 Replay Protected Memory Block: Not Supported 00:18:32.831 00:18:32.831 Firmware Slot Information 00:18:32.831 ========================= 00:18:32.831 Active slot: 0 00:18:32.831 00:18:32.831 Asymmetric Namespace Access 00:18:32.831 =========================== 00:18:32.831 Change Count : 0 00:18:32.831 Number of ANA Group Descriptors : 1 00:18:32.831 ANA Group Descriptor : 0 00:18:32.831 ANA Group ID : 1 00:18:32.831 Number of NSID Values : 1 00:18:32.831 Change Count : 0 00:18:32.831 ANA State : 1 00:18:32.831 Namespace Identifier : 1 00:18:32.831 00:18:32.831 Commands Supported and Effects 00:18:32.831 ============================== 00:18:32.831 Admin Commands 00:18:32.831 -------------- 00:18:32.831 Get Log Page (02h): Supported 00:18:32.831 Identify (06h): Supported 00:18:32.831 Abort (08h): Supported 00:18:32.831 Set Features (09h): Supported 00:18:32.831 Get Features (0Ah): Supported 00:18:32.831 Asynchronous Event Request (0Ch): Supported 00:18:32.831 Keep Alive (18h): Supported 00:18:32.831 I/O Commands 00:18:32.831 ------------ 00:18:32.831 Flush (00h): Supported 00:18:32.831 Write (01h): Supported LBA-Change 00:18:32.831 Read (02h): Supported 00:18:32.832 Write Zeroes (08h): Supported LBA-Change 00:18:32.832 Dataset Management (09h): Supported 00:18:32.832 00:18:32.832 Error Log 00:18:32.832 ========= 00:18:32.832 Entry: 0 00:18:32.832 Error Count: 0x3 00:18:32.832 Submission Queue Id: 0x0 00:18:32.832 Command Id: 0x5 00:18:32.832 Phase Bit: 0 00:18:32.832 Status Code: 0x2 00:18:32.832 Status Code Type: 0x0 00:18:32.832 Do Not Retry: 1 00:18:32.832 Error 
Location: 0x28 00:18:32.832 LBA: 0x0 00:18:32.832 Namespace: 0x0 00:18:32.832 Vendor Log Page: 0x0 00:18:32.832 ----------- 00:18:32.832 Entry: 1 00:18:32.832 Error Count: 0x2 00:18:32.832 Submission Queue Id: 0x0 00:18:32.832 Command Id: 0x5 00:18:32.832 Phase Bit: 0 00:18:32.832 Status Code: 0x2 00:18:32.832 Status Code Type: 0x0 00:18:32.832 Do Not Retry: 1 00:18:32.832 Error Location: 0x28 00:18:32.832 LBA: 0x0 00:18:32.832 Namespace: 0x0 00:18:32.832 Vendor Log Page: 0x0 00:18:32.832 ----------- 00:18:32.832 Entry: 2 00:18:32.832 Error Count: 0x1 00:18:32.832 Submission Queue Id: 0x0 00:18:32.832 Command Id: 0x4 00:18:32.832 Phase Bit: 0 00:18:32.832 Status Code: 0x2 00:18:32.832 Status Code Type: 0x0 00:18:32.832 Do Not Retry: 1 00:18:32.832 Error Location: 0x28 00:18:32.832 LBA: 0x0 00:18:32.832 Namespace: 0x0 00:18:32.832 Vendor Log Page: 0x0 00:18:32.832 00:18:32.832 Number of Queues 00:18:32.832 ================ 00:18:32.832 Number of I/O Submission Queues: 128 00:18:32.832 Number of I/O Completion Queues: 128 00:18:32.832 00:18:32.832 ZNS Specific Controller Data 00:18:32.832 ============================ 00:18:32.832 Zone Append Size Limit: 0 00:18:32.832 00:18:32.832 00:18:32.832 Active Namespaces 00:18:32.832 ================= 00:18:32.832 get_feature(0x05) failed 00:18:32.832 Namespace ID:1 00:18:32.832 Command Set Identifier: NVM (00h) 00:18:32.832 Deallocate: Supported 00:18:32.832 Deallocated/Unwritten Error: Not Supported 00:18:32.832 Deallocated Read Value: Unknown 00:18:32.832 Deallocate in Write Zeroes: Not Supported 00:18:32.832 Deallocated Guard Field: 0xFFFF 00:18:32.832 Flush: Supported 00:18:32.832 Reservation: Not Supported 00:18:32.832 Namespace Sharing Capabilities: Multiple Controllers 00:18:32.832 Size (in LBAs): 1310720 (5GiB) 00:18:32.832 Capacity (in LBAs): 1310720 (5GiB) 00:18:32.832 Utilization (in LBAs): 1310720 (5GiB) 00:18:32.832 UUID: a28eebbe-2e1c-4cd8-bbff-10ab7a10b759 00:18:32.832 Thin Provisioning: Not Supported 00:18:32.832 Per-NS Atomic Units: Yes 00:18:32.832 Atomic Boundary Size (Normal): 0 00:18:32.832 Atomic Boundary Size (PFail): 0 00:18:32.832 Atomic Boundary Offset: 0 00:18:32.832 NGUID/EUI64 Never Reused: No 00:18:32.832 ANA group ID: 1 00:18:32.832 Namespace Write Protected: No 00:18:32.832 Number of LBA Formats: 1 00:18:32.832 Current LBA Format: LBA Format #00 00:18:32.832 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:32.832 00:18:32.832 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:32.832 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.832 14:06:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.832 rmmod nvme_tcp 00:18:32.832 rmmod nvme_fabrics 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:32.832 14:06:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.832 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:33.090 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:33.091 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:33.091 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:33.091 14:06:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:33.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:33.913 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:33.913 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:33.913 00:18:33.913 real 0m3.109s 00:18:33.913 user 0m1.029s 00:18:33.913 sys 0m1.633s 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.913 ************************************ 00:18:33.913 END TEST nvmf_identify_kernel_target 00:18:33.913 ************************************ 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.913 14:06:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.172 ************************************ 00:18:34.172 START TEST nvmf_auth_host 00:18:34.172 ************************************ 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:34.172 * Looking for test storage... 00:18:34.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.172 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.173 14:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:34.173 Cannot find device "nvmf_tgt_br" 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.173 Cannot find device "nvmf_tgt_br2" 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:34.173 Cannot find device "nvmf_tgt_br" 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:34.173 Cannot find device "nvmf_tgt_br2" 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:34.173 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.431 14:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:34.431 00:18:34.431 --- 10.0.0.2 ping statistics --- 00:18:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.431 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:18:34.431 00:18:34.431 --- 10.0.0.3 ping statistics --- 00:18:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.431 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:18:34.431 00:18:34.431 --- 10.0.0.1 ping statistics --- 00:18:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.431 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77694 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77694 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77694 ']' 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
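The nvmfappstart/waitforlisten sequence traced above amounts to launching the SPDK target inside the test network namespace and blocking until its RPC socket answers. A minimal sketch of that step, using the binary path and flags visible in the trace and assuming the default /var/tmp/spdk.sock RPC socket; the polling loop stands in for the waitforlisten helper, which does more bookkeeping in the real common scripts:

    # Start nvmf_tgt inside the namespace created earlier (flags taken from the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # Poll the RPC socket until the application is ready to accept rpc_cmd calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done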
00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.431 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70268116022c95aa8cfa0cc5fdcbd9a3 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4ql 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70268116022c95aa8cfa0cc5fdcbd9a3 0 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70268116022c95aa8cfa0cc5fdcbd9a3 0 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70268116022c95aa8cfa0cc5fdcbd9a3 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4ql 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4ql 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4ql 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.806 14:06:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5592c874110294015774b1ac070455d4b0792f20a37d23bfd1cfb7d618c0ccb8 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.C09 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5592c874110294015774b1ac070455d4b0792f20a37d23bfd1cfb7d618c0ccb8 3 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5592c874110294015774b1ac070455d4b0792f20a37d23bfd1cfb7d618c0ccb8 3 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5592c874110294015774b1ac070455d4b0792f20a37d23bfd1cfb7d618c0ccb8 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.C09 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.C09 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.C09 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=15787e3dad49d9044a980671c4e19f28d56ac90f62194a82 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ssx 00:18:35.806 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 15787e3dad49d9044a980671c4e19f28d56ac90f62194a82 0 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 15787e3dad49d9044a980671c4e19f28d56ac90f62194a82 0 
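The gen_dhchap_key trace above shows how each test secret is produced: a fixed number of random bytes is read from /dev/urandom as hex, wrapped into a DHHC-1 secret, and stored in a temp file readable only by its owner. A condensed sketch of that flow for the 32-character null-digest case, reusing the helper names from the trace; the DHHC-1 wrapping itself is done by an inline python snippet in the real script and is only represented here by format_dhchap_key, with the redirection into the file being an assumption:

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.4ql in this run
    format_dhchap_key "$key" 0 > "$file"   # digest index 0 == null, per the trace
    chmod 0600 "$file"                     # secrets must not be world-readable
    echo "$file"                           # path later fed to rpc_cmd keyring_file_add_key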
00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=15787e3dad49d9044a980671c4e19f28d56ac90f62194a82 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ssx 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ssx 00:18:35.807 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Ssx 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=201ee74c6ec76eb4a45dbc70980c168a5ab67ab1177287f2 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZCF 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 201ee74c6ec76eb4a45dbc70980c168a5ab67ab1177287f2 2 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 201ee74c6ec76eb4a45dbc70980c168a5ab67ab1177287f2 2 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=201ee74c6ec76eb4a45dbc70980c168a5ab67ab1177287f2 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZCF 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZCF 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZCF 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.807 14:06:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d138b69e5429cd57ea10fdda328984b 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rcx 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d138b69e5429cd57ea10fdda328984b 1 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d138b69e5429cd57ea10fdda328984b 1 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d138b69e5429cd57ea10fdda328984b 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:35.807 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rcx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rcx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rcx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52bb931d8e7d05da7841688958fc4b0b 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4Iu 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52bb931d8e7d05da7841688958fc4b0b 1 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52bb931d8e7d05da7841688958fc4b0b 1 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=52bb931d8e7d05da7841688958fc4b0b 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4Iu 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4Iu 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4Iu 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eea1feac1b208c92775a26fcbdc323de44627a485c969079 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Odx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eea1feac1b208c92775a26fcbdc323de44627a485c969079 2 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eea1feac1b208c92775a26fcbdc323de44627a485c969079 2 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eea1feac1b208c92775a26fcbdc323de44627a485c969079 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Odx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Odx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Odx 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:36.065 14:06:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3a147e674dbf9e5c6e09253770fdd5ef 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5h8 00:18:36.065 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3a147e674dbf9e5c6e09253770fdd5ef 0 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3a147e674dbf9e5c6e09253770fdd5ef 0 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3a147e674dbf9e5c6e09253770fdd5ef 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5h8 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5h8 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5h8 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b04638f639fcb78cb4ba934c79c6fd56ad0c8c6c4027e880d364728b7701cbe3 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bzg 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b04638f639fcb78cb4ba934c79c6fd56ad0c8c6c4027e880d364728b7701cbe3 3 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b04638f639fcb78cb4ba934c79c6fd56ad0c8c6c4027e880d364728b7701cbe3 3 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b04638f639fcb78cb4ba934c79c6fd56ad0c8c6c4027e880d364728b7701cbe3 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:36.066 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bzg 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bzg 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bzg 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77694 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77694 ']' 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.324 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4ql 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.C09 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.C09 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Ssx 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZCF ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ZCF 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rcx 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4Iu ]] 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Iu 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Odx 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5h8 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5h8 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bzg 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.583 14:06:45 
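The trace above covers the key-material setup of host/auth.sh: gen_dhchap_key pulls random hex from /dev/urandom with xxd, format_dhchap_key wraps it into a DHHC-1 secret through an inline python step, the file is locked down with chmod 0600, and each resulting file is then handed to the running target with rpc_cmd keyring_file_add_key as key0..key4 and ckey0..ckey3. A minimal sketch of that flow, assuming rpc_cmd forwards to scripts/rpc.py, and with the python part a hedged reconstruction of the DHHC-1 encoding (base64 of the hex string plus a 4-byte little-endian CRC-32 tail) rather than a verbatim copy of nvmf/common.sh:

# Sketch only: variable names are illustrative, not the script's own helpers.
key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex chars, as for the "null" key in the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(key).to_bytes(4, "little")            # 4-byte tail, matching the DHHC-1 secrets seen above
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"

# Register the file with the target; rpc_cmd in the trace is assumed to wrap scripts/rpc.py.
scripts/rpc.py keyring_file_add_key key0 "$file"

The key0/ckey0 names registered here are what the later bdev_nvme_attach_controller calls reference with --dhchap-key and --dhchap-ctrlr-key.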
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:36.583 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:37.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:37.148 Waiting for block devices as requested 00:18:37.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:37.408 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:37.974 No valid GPT data, bailing 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:37.974 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:37.975 No valid GPT data, bailing 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:37.975 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:38.234 No valid GPT data, bailing 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:38.234 No valid GPT data, bailing 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:38.234 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -a 10.0.0.1 -t tcp -s 4420 00:18:38.234 00:18:38.234 Discovery Log Number of Records 2, Generation counter 2 00:18:38.234 =====Discovery Log Entry 0====== 00:18:38.235 trtype: tcp 00:18:38.235 adrfam: ipv4 00:18:38.235 subtype: current discovery subsystem 00:18:38.235 treq: not specified, sq flow control disable supported 00:18:38.235 portid: 1 00:18:38.235 trsvcid: 4420 00:18:38.235 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:38.235 traddr: 10.0.0.1 00:18:38.235 eflags: none 00:18:38.235 sectype: none 00:18:38.235 =====Discovery Log Entry 1====== 00:18:38.235 trtype: tcp 00:18:38.235 adrfam: ipv4 00:18:38.235 subtype: nvme subsystem 00:18:38.235 treq: not specified, sq flow control disable supported 00:18:38.235 portid: 1 00:18:38.235 trsvcid: 4420 00:18:38.235 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:38.235 traddr: 10.0.0.1 00:18:38.235 eflags: none 00:18:38.235 sectype: none 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.235 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 nvme0n1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.494 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 nvme0n1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.753 
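configure_kernel_target and nvmet_auth_init, traced above, drive the kernel nvmet soft target entirely through configfs: create the subsystem, back a namespace with the spare /dev/nvme1n1 that the GPT scan found unused, open a TCP port on 10.0.0.1:4420, then restrict the subsystem to nqn.2024-02.io.spdk:host0 before nvme discover confirms both the discovery subsystem and cnode0 are exported. The trace only shows the echoed values, not the files they are written to, so the attribute paths below are filled in from the standard nvmet configfs layout and should be read as a hedged reconstruction, not a transcript:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"      # assumed target of the first echo
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init then locks the subsystem down to the single test host NQN
# (the echo 0 in the trace is assumed to flip attr_allow_any_host back off).
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"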
14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.753 14:06:47 
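nvmet_auth_set_key (host/auth.sh@42-51) pushes the same DHHC-1 secrets to the kernel side so that target and host agree on hash, DH group, key and controller key for a given keyid. Again the echo destinations are not visible in the trace; assuming they are the per-host dhchap_* attributes the nvmet configfs exposes, one iteration looks roughly like:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest picked by the outer loop
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"       # the DHHC-1:00:... secret for this keyid
echo "$ckey"        > "$host/dhchap_ctrl_key"  # its DHHC-1 controller counterpart, empty for keyid 4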
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.753 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 nvme0n1 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:39.013 14:06:48 
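Each attach/verify/detach cycle in the trace is connect_authenticate (host/auth.sh@55-65): configure the initiator's allowed digests and DH groups, attach with the keyring names registered earlier, confirm a controller actually came up, then tear it down. Assuming rpc_cmd is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock, one iteration is roughly:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # handshake succeeded
$rpc bdev_nvme_detach_controller nvme0

If the DH-HMAC-CHAP handshake failed, bdev_nvme_attach_controller would return non-zero and the test run would abort here instead of cycling on to the next key.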
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 nvme0n1 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 nvme0n1 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.272 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.273 
14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:18:39.273 nvme0n1 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.532 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:39.533 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:39.533 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:39.533 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.533 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.793 14:06:48 
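From this point the trace repeats the same body while host/auth.sh@100-103 walks the full test matrix: every digest, every DH group, every key index (with ckey4 left empty to cover the no-controller-key case). Reconstructed from the loop markers visible in the trace, the driver is essentially:

for digest in "${digests[@]}"; do               # sha256, sha384, sha512
        for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 through ffdhe8192
                for keyid in "${!keys[@]}"; do  # 0..4
                        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                        connect_authenticate "$digest" "$dhgroup" "$keyid"
                done
        done
done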
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.793 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 nvme0n1 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.793 14:06:49 
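get_main_ns_ip, traced before every attach, maps the transport under test to the environment variable that holds the initiator-side address and then dereferences it (here NVMF_INITIATOR_IP resolves to 10.0.0.1 for tcp). A compact rendering of what the trace shows, with the surrounding variable names assumed from the nvmf common helpers:

get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1 in this run
        echo "${!ip}"
}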
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:39.793 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.794 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.054 14:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.054 nvme0n1 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.054 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.313 nvme0n1 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.313 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.314 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.574 nvme0n1 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.574 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.833 nvme0n1 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.833 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.419 14:06:50 
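[editor's note] Key index 4 above carries an empty controller key, which is why its bdev_nvme_attach_controller call was issued with --dhchap-key key4 only. The expansion at host/auth.sh@58 is what drops the argument pair; a small, self-contained illustration of that ${var:+...} idiom (placeholder values, not keys from this run):
# hedged illustration of the host/auth.sh@58 idiom: emit the ctrlr-key arguments only when a ctrl key exists
ckeys=([0]="example-ctrl-secret" [4]="")       # index 4 deliberately has no controller key
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"                             # prints 0: nothing gets appended to the attach command
keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"                              # prints: --dhchap-ctrlr-key ckey0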
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.419 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.420 nvme0n1 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.420 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.680 14:06:50 
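[editor's note] The nvmet_auth_set_key echoes above (host/auth.sh@48-@51) are shown by xtrace without their redirection targets. The sketch below assumes they land in the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and assumes a $hostnqn variable; the argument order and the conditional controller-key write are taken from the trace:
# hedged sketch of what nvmet_auth_set_key appears to do on the target side
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}            # arrays provided by the suite
  local host=/sys/kernel/config/nvmet/hosts/$hostnqn       # assumed path, not visible in the xtrace
  echo "hmac(${digest})" > "$host/dhchap_hash"             # e.g. hmac(sha256)
  echo "$dhgroup"        > "$host/dhchap_dhgroup"          # e.g. ffdhe4096
  echo "$key"            > "$host/dhchap_key"              # host secret for this key index
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key" # bidirectional auth only when a ctrl key exists
}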
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.680 nvme0n1 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.680 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.939 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.940 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.199 nvme0n1 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.199 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.200 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.459 nvme0n1 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:42.459 14:06:51 
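[editor's note] The "[[ nvme0 == \n\v\m\e\0 ]]" check above (host/auth.sh@64) looks odd only because xtrace escapes the quoted right-hand side of a [[ == ]] pattern match; the underlying test is a plain literal compare of the controller name returned by the RPC. A hedged reading of that verify-then-detach step, using only calls that appear in the trace:
# hedged sketch of the post-attach verification seen at host/auth.sh@64-@65
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # expect exactly one controller after an authenticated attach
[[ $name == "nvme0" ]]                                         # xtrace prints this as nvme0 == \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0                      # clean up before the next key index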
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.459 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 nvme0n1 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:42.718 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:42.719 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.719 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.620 nvme0n1 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.620 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.621 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.187 nvme0n1 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.187 14:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.187 14:06:54 
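[editor's note] On the DHHC-1 strings exchanged throughout this trace: my reading, which is an assumption based on the NVMe DH-HMAC-CHAP secret representation and not something this log states, is that the two-digit field after "DHHC-1:" marks the secret transformation hash (00 meaning none, 01/02/03 a SHA-256/384/512 transform) and that the base64 payload carries the secret plus a 4-byte CRC. A quick way to inspect the payload size of one key taken verbatim from this trace:
# hedged inspection of a DHHC-1 secret's payload length
key='DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36; read here as 32-byte secret + 4-byte CRC (assumption)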
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.187 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.446 nvme0n1 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:45.446 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.446 
14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.015 nvme0n1 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:46.015 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.016 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.276 nvme0n1 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.276 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.536 14:06:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.536 14:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.105 nvme0n1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.105 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.674 nvme0n1 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.674 
14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.674 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.242 nvme0n1 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.242 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.501 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.502 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.502 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:48.502 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.502 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 nvme0n1 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 14:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.071 14:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.071 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.659 nvme0n1 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.659 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.660 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.919 nvme0n1 00:18:49.919 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.919 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.919 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.919 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.919 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.919 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 nvme0n1 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.920 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:50.181 
14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.181 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 nvme0n1 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.182 
14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 nvme0n1 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:50.441 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.442 nvme0n1 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.442 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:50.701 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.702 nvme0n1 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.702 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.961 
14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.961 14:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.961 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 nvme0n1 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:50.962 14:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.962 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.222 nvme0n1 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.222 14:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.222 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.482 nvme0n1 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:51.482 
14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:51.482 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.483 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
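The trace above keeps replaying the same host-side sequence for each digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach the controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirm that nvme0 is reported by bdev_nvme_get_controllers, then detach it before the next iteration. A minimal sketch of one such iteration driven directly through SPDK's scripts/rpc.py, assuming a target is already listening on 10.0.0.1:4420 and that the key names key2/ckey2 were registered earlier (outside this excerpt), could look like:

    #!/usr/bin/env bash
    # Sketch only, not part of the CI run: replays one host-side iteration of the
    # loop traced above. Assumes spdk_tgt is running and the DH-CHAP keys are
    # already available under the names key2/ckey2, as referenced in this log.
    RPC=./scripts/rpc.py
    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
    $RPC bdev_nvme_detach_controller nvme0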
00:18:51.742 nvme0n1 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.742 14:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.742 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.743 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.001 nvme0n1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.001 14:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.001 14:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.001 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.260 nvme0n1 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.260 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.518 nvme0n1 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.518 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.777 nvme0n1 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.777 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.035 nvme0n1 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.035 14:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.035 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.601 nvme0n1 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.601 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.602 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.602 14:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.602 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.861 nvme0n1 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.861 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.122 nvme0n1 00:18:54.122 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.122 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.122 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.122 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.122 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.382 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.642 nvme0n1 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.642 14:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.642 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.210 nvme0n1 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.210 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.778 nvme0n1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.778 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 nvme0n1 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.374 14:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.374 14:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.374 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 nvme0n1 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:56.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.943 
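
The xtrace blocks above all repeat one pattern per digest/DH-group/key-ID combination: host/auth.sh programs the key into the target, restricts the host to a single DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, resolves the initiator address (10.0.0.1 for TCP), attaches a controller with the matching key, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it again. A condensed sketch of the iteration just traced (sha384 / ffdhe8192 / key ID 3) follows; the scripts/rpc.py path is an assumption (the trace only shows the rpc_cmd wrapper), and key3/ckey3 are key names registered earlier in the test, outside this excerpt.

    # One connect_authenticate iteration, condensed from the trace above.
    # Sketch only: scripts/rpc.py is assumed to be what rpc_cmd wraps, and
    # key3/ckey3 must already exist as registered key names.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
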
14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.512 nvme0n1 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.512 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.080 nvme0n1 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:58.080 14:07:07 
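
The ip_candidates fragment that recurs before every attach is get_main_ns_ip from nvmf/common.sh choosing which address the host should dial: NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP, which expands to 10.0.0.1 in this run. A rough reconstruction from the trace is below; it is a sketch, not the verbatim source, and TEST_TRANSPORT is an assumed name for whatever variable expands to "tcp" in the [[ -z tcp ]] checks.

    # Rough shape of get_main_ns_ip as implied by the xtrace output above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT is an assumption; the trace only shows its value, "tcp"
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # traces as ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1
        echo "${!ip}"
    }
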
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.080 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.338 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.338 14:07:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.339 nvme0n1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:58.339 14:07:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.339 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 nvme0n1 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 nvme0n1 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.598 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.857 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.857 nvme0n1 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.857 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.858 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.117 nvme0n1 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.117 nvme0n1 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.117 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.377 nvme0n1 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:59.377 
14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:59.377 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.378 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.638 nvme0n1 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.638 
14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.638 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.959 nvme0n1 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.959 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.959 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 nvme0n1 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.960 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.219 nvme0n1 00:19:00.219 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.220 
14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.220 14:07:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.220 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 nvme0n1 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:00.479 14:07:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 nvme0n1 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.739 14:07:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.740 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 nvme0n1 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.998 
14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.998 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
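
The trace above completes one full sha512/ffdhe4096 pass of the DH-HMAC-CHAP matrix in host/auth.sh: for each keyid the target-side secret is installed with nvmet_auth_set_key, the host is restricted to a single digest/dhgroup pair via bdev_nvme_set_options, the controller is attached with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN whenever a controller secret exists), bdev_nvme_get_controllers is checked for nvme0, and the controller is detached before the next iteration; the same loop then repeats below for ffdhe6144 and ffdhe8192. A minimal standalone sketch of that per-keyid RPC cycle follows. It is an illustration only: it assumes a target already listening on the 10.0.0.1:4420 path set up earlier in the job and a running SPDK application to receive the RPCs, it calls scripts/rpc.py directly in place of the test framework's rpc_cmd helper, and key0/ckey0 are assumed to be previously registered key names (the registration step is outside this excerpt) referring to DHHC-1 secrets like the ones echoed in the trace.

#!/usr/bin/env bash
# Sketch of one connect/verify/disconnect cycle from the trace (assumptions noted above).
set -euo pipefail

RPC=./scripts/rpc.py                  # stands in for the test's rpc_cmd helper
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0

# Limit the initiator to the digest/dhgroup pair under test.
"$RPC" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key requests bidirectional authentication.
# key0/ckey0 are names of previously registered keys, not the raw DHHC-1 secrets.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Authentication succeeded if the controller shows up, mirroring the nvme0 check in the trace.
[[ "$("$RPC" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Tear down so the next keyid/dhgroup combination starts clean.
"$RPC" bdev_nvme_detach_controller nvme0

The DHHC-1:NN:...: strings echoed above are the textual DH-HMAC-CHAP secrets; the loop walks keyids 0 through 4 so that both bidirectional attaches (with a ckey) and the unidirectional case (key4, no controller key) are exercised for every dhgroup.
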
00:19:01.257 nvme0n1 00:19:01.257 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.257 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.257 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:01.258 14:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.258 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 nvme0n1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.826 14:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.826 14:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.826 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.085 nvme0n1 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.085 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:02.343 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.344 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.602 nvme0n1 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.602 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.603 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:02.603 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.603 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.169 nvme0n1 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.169 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.170 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.429 nvme0n1 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAyNjgxMTYwMjJjOTVhYThjZmEwY2M1ZmRjYmQ5YTPmVsOK: 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: ]] 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTU5MmM4NzQxMTAyOTQwMTU3NzRiMWFjMDcwNDU1ZDRiMDc5MmYyMGEzN2QyM2JmZDFjZmI3ZDYxOGMwY2NiOJ7HI14=: 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.429 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.688 14:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.688 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.256 nvme0n1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.257 14:07:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.257 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.825 nvme0n1 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQxMzhiNjllNTQyOWNkNTdlYTEwZmRkYTMyODk4NGJ/rO4J: 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: ]] 00:19:04.825 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJiYjkzMWQ4ZTdkMDVkYTc4NDE2ODg5NThmYzRiMGJWZnUV: 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.826 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.763 nvme0n1 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWVhMWZlYWMxYjIwOGM5Mjc3NWEyNmZjYmRjMzIzZGU0NDYyN2E0ODVjOTY5MDc5I1pUQQ==: 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ExNDdlNjc0ZGJmOWU1YzZlMDkyNTM3NzBmZGQ1ZWbEngaG: 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.763 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.330 nvme0n1 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjA0NjM4ZjYzOWZjYjc4Y2I0YmE5MzRjNzljNmZkNTZhZDBjOGM2YzQwMjdlODgwZDM2NDcyOGI3NzAxY2JlM2E6hHc=: 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:06.330 14:07:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.330 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.954 nvme0n1 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU3ODdlM2RhZDQ5ZDkwNDRhOTgwNjcxYzRlMTlmMjhkNTZhYzkwZjYyMTk0YTgyFlasiw==: 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjAxZWU3NGM2ZWM3NmViNGE0NWRiYzcwOTgwYzE2OGE1YWI2N2FiMTE3NzI4N2YyJt3ftw==: 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.954 request: 00:19:06.954 { 00:19:06.954 "name": "nvme0", 00:19:06.954 "trtype": "tcp", 00:19:06.954 "traddr": "10.0.0.1", 00:19:06.954 "adrfam": "ipv4", 00:19:06.954 "trsvcid": "4420", 00:19:06.954 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:06.954 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:06.954 "prchk_reftag": false, 00:19:06.954 "prchk_guard": false, 00:19:06.954 "hdgst": false, 00:19:06.954 "ddgst": false, 00:19:06.954 "method": "bdev_nvme_attach_controller", 00:19:06.954 "req_id": 1 00:19:06.954 } 00:19:06.954 Got JSON-RPC error response 00:19:06.954 response: 00:19:06.954 { 00:19:06.954 "code": -5, 00:19:06.954 "message": "Input/output error" 00:19:06.954 } 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.954 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.955 14:07:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.955 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.213 request: 00:19:07.213 { 00:19:07.213 "name": "nvme0", 00:19:07.213 "trtype": "tcp", 00:19:07.213 "traddr": "10.0.0.1", 00:19:07.213 "adrfam": "ipv4", 00:19:07.213 "trsvcid": "4420", 00:19:07.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:07.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:07.213 "prchk_reftag": false, 00:19:07.213 "prchk_guard": false, 00:19:07.213 "hdgst": false, 00:19:07.213 "ddgst": false, 00:19:07.213 "dhchap_key": "key2", 00:19:07.213 "method": "bdev_nvme_attach_controller", 00:19:07.213 "req_id": 1 00:19:07.213 } 00:19:07.213 Got JSON-RPC error response 00:19:07.213 response: 00:19:07.213 { 00:19:07.213 "code": -5, 00:19:07.213 "message": "Input/output error" 00:19:07.213 } 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.213 14:07:16 
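The failing attach attempts in this stretch are the intended negative path: once the kernel target has been re-keyed for sha256/ffdhe2048 with key 1, connecting with no DH-HMAC-CHAP key, with the wrong key, or with a mismatched controller key must be refused, and the RPC surfaces code -5 ("Input/output error") as shown in the request/response dumps. A minimal sketch of that expectation, reusing the NOT helper the suite already wraps around rpc_cmd:

  # each of these must fail; NOT inverts the exit status so the test passes on failure
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0                    # no key at all
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2  # wrong key
  # and no controller may be left behind afterwards
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))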
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.213 request: 00:19:07.213 { 00:19:07.213 "name": "nvme0", 00:19:07.213 "trtype": "tcp", 00:19:07.213 "traddr": "10.0.0.1", 00:19:07.213 "adrfam": "ipv4", 00:19:07.213 "trsvcid": "4420", 00:19:07.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:07.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:19:07.213 "prchk_reftag": false, 00:19:07.213 "prchk_guard": false, 00:19:07.213 "hdgst": false, 00:19:07.213 "ddgst": false, 00:19:07.213 "dhchap_key": "key1", 00:19:07.213 "dhchap_ctrlr_key": "ckey2", 00:19:07.213 "method": "bdev_nvme_attach_controller", 00:19:07.213 "req_id": 1 00:19:07.213 } 00:19:07.213 Got JSON-RPC error response 00:19:07.213 response: 00:19:07.213 { 00:19:07.213 "code": -5, 00:19:07.213 "message": "Input/output error" 00:19:07.213 } 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.213 rmmod nvme_tcp 00:19:07.213 rmmod nvme_fabrics 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77694 ']' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77694 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77694 ']' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77694 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77694 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:07.213 killing process with pid 77694 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77694' 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77694 00:19:07.213 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77694 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:07.471 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:07.728 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:08.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:08.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:08.553 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:08.553 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4ql /tmp/spdk.key-null.Ssx /tmp/spdk.key-sha256.rcx /tmp/spdk.key-sha384.Odx /tmp/spdk.key-sha512.bzg /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:08.553 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.122 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:09.122 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:19:09.122 00:19:09.122 real 0m35.163s 00:19:09.122 user 0m31.658s 00:19:09.122 sys 0m4.586s 00:19:09.122 ************************************ 00:19:09.122 END TEST nvmf_auth_host 00:19:09.122 ************************************ 00:19:09.122 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.122 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.382 ************************************ 00:19:09.382 START TEST nvmf_digest 00:19:09.382 ************************************ 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:09.382 * Looking for test storage... 00:19:09.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:09.382 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.383 14:07:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:09.383 Cannot find device "nvmf_tgt_br" 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.383 Cannot find device "nvmf_tgt_br2" 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:09.383 Cannot find device "nvmf_tgt_br" 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:09.383 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:09.383 Cannot find device "nvmf_tgt_br2" 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.641 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.642 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:09.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:19:09.900 00:19:09.900 --- 10.0.0.2 ping statistics --- 00:19:09.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.900 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:09.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:19:09.900 00:19:09.900 --- 10.0.0.3 ping statistics --- 00:19:09.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.900 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:09.900 00:19:09.900 --- 10.0.0.1 ping statistics --- 00:19:09.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.900 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.900 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:09.900 ************************************ 00:19:09.900 START TEST nvmf_digest_clean 00:19:09.900 ************************************ 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79271 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79271 00:19:09.900 
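Before the digest tests start, nvmf_veth_init builds the usual veth test network: the initiator address stays in the root namespace, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace and bridged back, port 4420 is opened, and reachability is verified with the pings above; the target application is then launched inside that namespace with --wait-for-rpc. Condensed from the commands in this trace (the second target interface, 10.0.0.3, and the individual link-up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                        # sanity check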
14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79271 ']' 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.900 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:09.901 [2024-07-25 14:07:19.075850] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:09.901 [2024-07-25 14:07:19.075927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.901 [2024-07-25 14:07:19.201729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.161 [2024-07-25 14:07:19.319725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.161 [2024-07-25 14:07:19.319965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.161 [2024-07-25 14:07:19.320035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.161 [2024-07-25 14:07:19.320090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.161 [2024-07-25 14:07:19.320132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
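The trace above (nvmf/common.sh@183-@482) is the target-side bring-up for the whole digest suite: the pre-created veth/namespace plumbing is brought up, a bridge joins the host-side ends, TCP port 4420 is opened, connectivity is checked with pings in both directions, the nvme-tcp host driver is loaded, and nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc. A condensed sketch of those steps, run from the SPDK repository root; the veth pairs and the namespace themselves are created earlier in nvmf/common.sh and are assumed here:

# Condensed from the nvmf/common.sh trace above. The veth pairs (nvmf_init_if/_br,
# nvmf_tgt_if*/_br*) and the nvmf_tgt_ns_spdk namespace are assumed to exist already.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Tie the host-side ends together with a bridge and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity-check connectivity both ways, load the host driver, start the target.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers RPCs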
00:19:10.161 [2024-07-25 14:07:19.320208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.727 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.727 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:10.727 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.727 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.727 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.727 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.985 [2024-07-25 14:07:20.069461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:10.985 null0 00:19:10.985 [2024-07-25 14:07:20.114799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.985 [2024-07-25 14:07:20.138872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79303 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79303 /var/tmp/bperf.sock 00:19:10.985 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79303 ']' 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.986 14:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.986 [2024-07-25 14:07:20.195737] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:10.986 [2024-07-25 14:07:20.195814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79303 ] 00:19:11.244 [2024-07-25 14:07:20.322214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.244 [2024-07-25 14:07:20.433132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:12.179 [2024-07-25 14:07:21.392918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:12.179 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:12.438 nvme0n1 00:19:12.438 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:12.438 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:12.697 Running I/O for 2 seconds... 
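From here on every run_bperf iteration repeats the initiator-side pattern just traced: bdevperf is started against its own RPC socket with --wait-for-rpc, framework init is completed over that socket, the remote controller is attached with data digest enabled (--ddgst), and the timed workload is kicked off through bdevperf.py. Reassembled from the commands echoed above, with paths relative to the SPDK repository root:

# Initiator side of one run_bperf iteration (randread, 4 KiB, QD 128), as echoed above.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
waitforlisten "$bperfpid" /var/tmp/bperf.sock
# Finish initialization, then create an NVMe/TCP bdev with data digest (DDGST) enabled.
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Kick off the timed I/O run on the freshly created nvme0n1 bdev.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests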
00:19:14.599 00:19:14.599 Latency(us) 00:19:14.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:14.599 nvme0n1 : 2.01 18786.75 73.39 0.00 0.00 6808.27 6238.80 19689.42 00:19:14.599 =================================================================================================================== 00:19:14.599 Total : 18786.75 73.39 0.00 0.00 6808.27 6238.80 19689.42 00:19:14.599 0 00:19:14.599 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:14.599 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:14.599 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:14.599 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:14.599 | select(.opcode=="crc32c") 00:19:14.599 | "\(.module_name) \(.executed)"' 00:19:14.599 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79303 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79303 ']' 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79303 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.857 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79303 00:19:14.857 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:14.857 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:14.857 killing process with pid 79303 00:19:14.857 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79303' 00:19:14.857 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79303 00:19:14.857 Received shutdown signal, test time was about 2.000000 seconds 00:19:14.857 00:19:14.857 Latency(us) 00:19:14.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.857 =================================================================================================================== 00:19:14.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.857 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79303 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79364 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79364 /var/tmp/bperf.sock 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79364 ']' 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.116 14:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:15.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:15.116 Zero copy mechanism will not be used. 00:19:15.116 [2024-07-25 14:07:24.259075] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
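The second combination is starting up at this point. run_digest drives the same helper four times in total (host/digest.sh@128-@131): randread and randwrite, each at 4 KiB with queue depth 128 and at 128 KiB with queue depth 16, all with DSA scanning disabled. digest.sh spells the four calls out one by one; the loop below is only an equivalent restatement for illustration:

# Equivalent restatement of the four nvmf_digest_clean combinations (illustration only;
# digest.sh itself calls run_bperf four times explicitly).
for rw in randread randwrite; do
    for combo in "4096 128" "131072 16"; do
        read -r bs qd <<< "$combo"
        run_bperf "$rw" "$bs" "$qd" false   # false => scan_dsa disabled for these runs
    done
done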
00:19:15.116 [2024-07-25 14:07:24.259137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79364 ] 00:19:15.116 [2024-07-25 14:07:24.396321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.374 [2024-07-25 14:07:24.482079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.947 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.947 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:15.947 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:15.947 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:15.947 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:16.206 [2024-07-25 14:07:25.341874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.206 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:16.206 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:16.464 nvme0n1 00:19:16.464 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:16.464 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:16.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:16.464 Zero copy mechanism will not be used. 00:19:16.464 Running I/O for 2 seconds... 
00:19:18.996 00:19:18.996 Latency(us) 00:19:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.996 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:18.996 nvme0n1 : 2.00 7736.40 967.05 0.00 0.00 2065.19 1717.10 5179.92 00:19:18.996 =================================================================================================================== 00:19:18.996 Total : 7736.40 967.05 0.00 0.00 2065.19 1717.10 5179.92 00:19:18.996 0 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:18.996 | select(.opcode=="crc32c") 00:19:18.996 | "\(.module_name) \(.executed)"' 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79364 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79364 ']' 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79364 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.996 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79364 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79364' 00:19:18.996 killing process with pid 79364 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79364 00:19:18.996 Received shutdown signal, test time was about 2.000000 seconds 00:19:18.996 00:19:18.996 Latency(us) 00:19:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.996 =================================================================================================================== 00:19:18.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79364 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79419 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79419 /var/tmp/bperf.sock 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79419 ']' 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:18.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.996 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:18.996 [2024-07-25 14:07:28.260914] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:19:18.997 [2024-07-25 14:07:28.260989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79419 ] 00:19:19.264 [2024-07-25 14:07:28.398624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.264 [2024-07-25 14:07:28.499788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.829 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.829 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:19.829 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:19.829 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:19.829 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:20.087 [2024-07-25 14:07:29.361005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:20.346 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.346 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.605 nvme0n1 00:19:20.605 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:20.605 14:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:20.605 Running I/O for 2 seconds... 
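Once the two-second run above completes (its result block follows), run_bperf checks that the crc32c data digests were really computed and that they ran in the expected accel module, which is "software" here because DSA is disabled. The check is a small RPC-plus-jq pipeline; a sketch reconstructed from the host/digest.sh@93-@98 lines that surround every result block:

# Read crc32c accel statistics from the bdevperf instance and verify the digest work
# really ran, and ran in the expected module.
read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software                          # would differ if scan_dsa were true
(( acc_executed > 0 )) || exit 1             # at least one digest must have been computed
[[ $acc_module == "$exp_module" ]] || exit 1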
00:19:22.507 00:19:22.507 Latency(us) 00:19:22.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.507 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:22.507 nvme0n1 : 2.00 18277.54 71.40 0.00 0.00 6997.09 2818.91 13679.57 00:19:22.507 =================================================================================================================== 00:19:22.507 Total : 18277.54 71.40 0.00 0.00 6997.09 2818.91 13679.57 00:19:22.507 0 00:19:22.764 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:22.764 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:22.764 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:22.764 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:22.764 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:22.764 | select(.opcode=="crc32c") 00:19:22.764 | "\(.module_name) \(.executed)"' 00:19:22.764 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:22.764 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:22.764 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:22.764 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:22.764 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79419 00:19:22.765 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79419 ']' 00:19:22.765 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79419 00:19:22.765 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:22.765 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.765 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79419 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79419' 00:19:23.023 killing process with pid 79419 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79419 00:19:23.023 Received shutdown signal, test time was about 2.000000 seconds 00:19:23.023 00:19:23.023 Latency(us) 00:19:23.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.023 =================================================================================================================== 00:19:23.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79419 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:23.023 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79479 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79479 /var/tmp/bperf.sock 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79479 ']' 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:23.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.024 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:23.284 [2024-07-25 14:07:32.331191] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:23.284 [2024-07-25 14:07:32.331407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:19:23.284 Zero copy mechanism will not be used. 
00:19:23.284 llocations --file-prefix=spdk_pid79479 ] 00:19:23.284 [2024-07-25 14:07:32.476722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.284 [2024-07-25 14:07:32.573176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.232 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:24.232 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:24.232 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:24.233 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:24.233 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:24.233 [2024-07-25 14:07:33.429864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:24.233 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:24.233 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:24.519 nvme0n1 00:19:24.519 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:24.519 14:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:24.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:24.777 Zero copy mechanism will not be used. 00:19:24.777 Running I/O for 2 seconds... 
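Every iteration, including the one starting here, is torn down by killprocess once its statistics have been checked. The helper's behaviour can be pieced together from the autotest_common.sh@950-@974 fragments echoed around each teardown; a simplified sketch (the sudo-wrapped branch those fragments test for is not fleshed out, since none of these bdevperf runs use it):

# Simplified killprocess, reconstructed from the autotest_common.sh trace lines.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                         # process must still be alive
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            # The real helper resolves the sudo child pid here; not needed for bdevperf,
            # which shows up as reactor_1 in these runs.
            return 1
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap it and propagate the status
}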
00:19:26.681 00:19:26.681 Latency(us) 00:19:26.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.681 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:26.681 nvme0n1 : 2.00 7641.64 955.21 0.00 0.00 2089.68 1438.07 7383.53 00:19:26.681 =================================================================================================================== 00:19:26.681 Total : 7641.64 955.21 0.00 0.00 2089.68 1438.07 7383.53 00:19:26.681 0 00:19:26.681 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:26.681 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:26.681 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:26.681 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:26.681 | select(.opcode=="crc32c") 00:19:26.681 | "\(.module_name) \(.executed)"' 00:19:26.681 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:26.954 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:26.954 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:26.954 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:26.954 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:26.954 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79479 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79479 ']' 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79479 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79479 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79479' 00:19:26.955 killing process with pid 79479 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79479 00:19:26.955 Received shutdown signal, test time was about 2.000000 seconds 00:19:26.955 00:19:26.955 Latency(us) 00:19:26.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.955 =================================================================================================================== 00:19:26.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.955 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79479 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79271 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79271 ']' 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79271 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79271 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79271' 00:19:27.214 killing process with pid 79271 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79271 00:19:27.214 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79271 00:19:27.473 ************************************ 00:19:27.473 END TEST nvmf_digest_clean 00:19:27.473 ************************************ 00:19:27.473 00:19:27.473 real 0m17.591s 00:19:27.473 user 0m33.800s 00:19:27.473 sys 0m4.303s 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:27.473 ************************************ 00:19:27.473 START TEST nvmf_digest_error 00:19:27.473 ************************************ 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79563 00:19:27.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
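nvmf_digest_clean wraps up just above (about 17.6 s of wall-clock time) and the suite immediately moves on to nvmf_digest_error through the same run_test wrapper. The wrapper itself is not fully echoed in this excerpt; judging from the START/END banners and timing lines it produces, it roughly amounts to the following sketch:

# Rough shape of the run_test wrapper, inferred from its banners and timing output.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                   # e.g. run_digest or run_digest_error
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_digest_clean run_digest           # host/digest.sh@145
run_test nvmf_digest_error run_digest_error     # host/digest.sh@147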
00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79563 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79563 ']' 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.473 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.473 [2024-07-25 14:07:36.725421] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:27.473 [2024-07-25 14:07:36.725496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.732 [2024-07-25 14:07:36.866740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.732 [2024-07-25 14:07:36.972100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.732 [2024-07-25 14:07:36.972147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.732 [2024-07-25 14:07:36.972154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.732 [2024-07-25 14:07:36.972160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.732 [2024-07-25 14:07:36.972165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.732 [2024-07-25 14:07:36.972187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.666 [2024-07-25 14:07:37.659281] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.666 [2024-07-25 14:07:37.710008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:28.666 null0 00:19:28.666 [2024-07-25 14:07:37.755382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.666 [2024-07-25 14:07:37.779448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
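The error half of the suite reuses the same target bring-up with one addition, visible just above: before the target framework is configured, crc32c operations are reassigned to the accel "error" module so that digest failures can be injected on demand later. A sketch of that target-side step against the default /var/tmp/spdk.sock; the rest of common_target_config (the null0 bdev, the TCP transport and the listener on 10.0.0.2:4420 reported above) is only summarized in a comment because its exact RPC batch is not echoed in this excerpt:

# Route every crc32c operation on the target through the error-injection accel module
# before the framework is initialized (this is why nvmf_tgt runs with --wait-for-rpc).
./scripts/rpc.py accel_assign_opc -o crc32c -m error
./scripts/rpc.py framework_start_init
# common_target_config then creates the null0 bdev, the TCP transport and the
# nqn.2016-06.io.spdk:cnode1 subsystem listening on 10.0.0.2:4420 (exact RPCs not shown
# in this excerpt).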
00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79596 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79596 /var/tmp/bperf.sock 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79596 ']' 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.666 14:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:28.666 [2024-07-25 14:07:37.835387] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:19:28.666 [2024-07-25 14:07:37.835546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79596 ] 00:19:28.925 [2024-07-25 14:07:37.972498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.925 [2024-07-25 14:07:38.077888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.925 [2024-07-25 14:07:38.121305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:29.495 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.495 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:19:29.495 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:29.495 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:29.754 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.013 nvme0n1 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:30.013 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:30.013 Running I/O for 2 seconds... 
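run_bperf_err then wires the two sides together as traced above: the bdevperf initiator is configured to keep per-command NVMe error statistics and retry failed I/O indefinitely, crc32c error injection is disabled while the controller attaches (the connect itself must see valid digests), and only after nvme0n1 exists is crc32c corruption enabled on the target (-t corrupt -i 256) before the workload starts. Because the target computes the data digest for the data it returns, each corrupted digest is caught by the host-side NVMe/TCP driver, which is exactly what the "data digest error on tqpair" messages and the COMMAND TRANSIENT TRANSPORT ERROR completions on the following lines show. Reassembled from the commands echoed above:

# Initiator: keep NVMe error counters and never give up on retries.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Target: make sure injection is off while the controller attaches with data digest enabled.
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target: enable crc32c corruption (arguments exactly as echoed above), then start the
# timed workload; each corrupted data digest surfaces on the initiator as a transient
# transport error on the affected command.
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests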
00:19:30.273 [2024-07-25 14:07:39.337052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.337212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.337263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.354207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.354371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.354418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.371044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.371188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.371237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.387772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.387918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.387968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.404552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.404688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.404739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.421076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.421208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.421255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.437649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.437768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.437779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.454160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.454214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.454225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.470515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.470567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.470578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.486879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.486932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.486943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.503228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.503278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.503289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.519529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.519580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.519591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.535884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.535938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.552332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.552388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.273 [2024-07-25 14:07:39.552398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.273 [2024-07-25 14:07:39.568794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.273 [2024-07-25 14:07:39.568852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.274 [2024-07-25 14:07:39.568863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.585304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.585358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.533 [2024-07-25 14:07:39.585367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.601838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.601894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.533 [2024-07-25 14:07:39.601904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.618424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.618480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.533 [2024-07-25 14:07:39.618490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.634902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.634959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.533 [2024-07-25 14:07:39.634969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.651506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.651567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.533 [2024-07-25 14:07:39.651578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.533 [2024-07-25 14:07:39.668109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.533 [2024-07-25 14:07:39.668173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.668184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.684774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.684838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.684850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.701541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.701604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.701618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.718278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.718347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.718359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.734973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.735040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.735052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.751736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.751798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.751809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.768706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.768772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.768783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.785499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.785574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3046 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.785586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.802261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.818878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.818938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.818949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.534 [2024-07-25 14:07:39.835355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.534 [2024-07-25 14:07:39.835410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.534 [2024-07-25 14:07:39.835422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.851961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.852010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.852020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.868649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.868703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.868715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.885279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.885347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.885358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.901901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.901964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:13666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.901975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.918369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.918427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.918438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.934832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.934891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.934902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.951355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.951402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.794 [2024-07-25 14:07:39.951413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.794 [2024-07-25 14:07:39.968099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.794 [2024-07-25 14:07:39.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:39.968163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:39.984854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:39.984911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:39.984923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.001543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.001599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.001610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.018195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.018256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.018267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.034756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.034814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.034825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.051322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.051374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.051386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.067854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.067908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.067919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.795 [2024-07-25 14:07:40.084220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:30.795 [2024-07-25 14:07:40.084276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.795 [2024-07-25 14:07:40.084287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.100641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.100692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.100703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.117110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.117172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.117182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.133914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 
00:19:31.055 [2024-07-25 14:07:40.133986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.133998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.150667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.150734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.150745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.167319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.167379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.167389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.184121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.184190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.201117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.201184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.201196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.217991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.218060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.218071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.234748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.234812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.234823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.251171] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.251231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.251241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.267660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.267724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.267736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.284350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.284414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.284425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.301356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.301420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.301431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.318306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.318369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.318380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.334829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.334891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.334902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.055 [2024-07-25 14:07:40.351460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.055 [2024-07-25 14:07:40.351520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.055 [2024-07-25 14:07:40.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:19:31.314 [2024-07-25 14:07:40.367969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.314 [2024-07-25 14:07:40.368027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.314 [2024-07-25 14:07:40.368037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.314 [2024-07-25 14:07:40.391481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.314 [2024-07-25 14:07:40.391542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.314 [2024-07-25 14:07:40.391553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.314 [2024-07-25 14:07:40.407816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.314 [2024-07-25 14:07:40.407873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.314 [2024-07-25 14:07:40.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.424248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.424320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.424331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.440616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.440671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.440681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.456961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.457018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.457029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.473345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.473400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.473410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.489733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.489794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.489805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.506133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.506188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.506198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.522586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.522645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.522655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.539046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.539107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.555602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.555659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.555670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.571924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.571975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.571985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.588240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.588293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 
14:07:40.588317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.315 [2024-07-25 14:07:40.604599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.315 [2024-07-25 14:07:40.604652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.315 [2024-07-25 14:07:40.604663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.620990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.621052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.621063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.637412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.637466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.637476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.653737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.653790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.653801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.670014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.670070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.670080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.686308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.686361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.702548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.702596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4621 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.702606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.718932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.718989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.719000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.735438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.735496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.735506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.751951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.752010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.752020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.768512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.768574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.768584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.785059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.785118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.785128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.801625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.801691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.801701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.818169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.818231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.818241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.834776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.834835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.834846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.851426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.851487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.851497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.574 [2024-07-25 14:07:40.867935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.574 [2024-07-25 14:07:40.867984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-25 14:07:40.867994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.884479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.884531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.900946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.900996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.901006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.917559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.917616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.917626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.934112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.934168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.934179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.950543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.950596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.950606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.966849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.966901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.966912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.983164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.983221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.983232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:40.999611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:40.999667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:40.999677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:41.015982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:41.016038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:41.016047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:41.031192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:41.031240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:41.031249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:41.045064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 
00:19:31.834 [2024-07-25 14:07:41.045105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:41.045114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:41.059001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:41.059043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:41.059051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.834 [2024-07-25 14:07:41.072709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.834 [2024-07-25 14:07:41.072749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.834 [2024-07-25 14:07:41.072757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.835 [2024-07-25 14:07:41.086432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.835 [2024-07-25 14:07:41.086474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.835 [2024-07-25 14:07:41.086483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.835 [2024-07-25 14:07:41.100820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.835 [2024-07-25 14:07:41.100888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.835 [2024-07-25 14:07:41.100899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.835 [2024-07-25 14:07:41.115698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.835 [2024-07-25 14:07:41.115741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.835 [2024-07-25 14:07:41.115750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.835 [2024-07-25 14:07:41.130773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:31.835 [2024-07-25 14:07:41.130822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.835 [2024-07-25 14:07:41.130830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.144836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.144880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.144890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.158674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.158721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.158731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.173021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.173071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.173079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.186952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.186995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.187005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.200830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.200870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.200879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.214465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.214513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.228679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.228739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.228748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.243481] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.243526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.243536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.258301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.258355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.258365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.272637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.272679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.272688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.286955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.287000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.287008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 [2024-07-25 14:07:41.300755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x155a4f0) 00:19:32.094 [2024-07-25 14:07:41.300801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.094 [2024-07-25 14:07:41.300810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.094 00:19:32.094 Latency(us) 00:19:32.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.094 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:32.094 nvme0n1 : 2.01 15584.26 60.88 0.00 0.00 8207.73 6496.36 31823.59 00:19:32.095 =================================================================================================================== 00:19:32.095 Total : 15584.26 60.88 0.00 0.00 8207.73 6496.36 31823.59 00:19:32.095 0 00:19:32.095 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:32.095 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:32.095 | .driver_specific 00:19:32.095 | .nvme_error 00:19:32.095 | .status_code 00:19:32.095 | .command_transient_transport_error' 00:19:32.095 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
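For context on the check just below: per the digest.sh trace above, the transient-error count is derived by querying bdevperf's RPC socket for per-bdev I/O statistics and filtering the NVMe error counters with jq. A minimal stand-alone sketch of that query (reusing the socket path and bdev name from the trace, and assuming error statistics were enabled via bdev_nvme_set_options --nvme-error-stat, as shown later in this log) is:

  # prints the number of COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The check only requires this count to be greater than zero, i.e. the injected data-digest errors must surface as transient transport errors rather than as hard I/O failures.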
00:19:32.095 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79596 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79596 ']' 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79596 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79596 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.354 killing process with pid 79596 00:19:32.354 Received shutdown signal, test time was about 2.000000 seconds 00:19:32.354 00:19:32.354 Latency(us) 00:19:32.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.354 =================================================================================================================== 00:19:32.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79596' 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79596 00:19:32.354 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79596 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79650 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79650 /var/tmp/bperf.sock 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79650 ']' 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:32.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.613 14:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:32.613 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:32.613 Zero copy mechanism will not be used. 00:19:32.613 [2024-07-25 14:07:41.832326] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:32.613 [2024-07-25 14:07:41.832388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79650 ] 00:19:32.872 [2024-07-25 14:07:41.970951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.872 [2024-07-25 14:07:42.074916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.872 [2024-07-25 14:07:42.119153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:33.808 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.067 nvme0n1 00:19:34.067 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:34.067 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.067 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:34.067 14:07:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.067 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:34.068 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:34.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:34.068 Zero copy mechanism will not be used. 00:19:34.068 Running I/O for 2 seconds... 00:19:34.068 [2024-07-25 14:07:43.344539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.344589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.344598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.348430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.348461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.348468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.352260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.352313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.356065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.356095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.356102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.359848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.359879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.359886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.363592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.363622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.363629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.367361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.367383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.367390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.068 [2024-07-25 14:07:43.371095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.068 [2024-07-25 14:07:43.371124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-07-25 14:07:43.371132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.374798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.374828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.378590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.378620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.378627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.382332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.382360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.382368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.386025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.386056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.386063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.389983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.390016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
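The shell trace just before this burst is the whole recipe for the 128 KiB randread pass: bdevperf started in wait-for-RPC mode on /var/tmp/bperf.sock, NVMe error statistics enabled, crc32c corruption injected via accel_error_inject_error, and the controller attached with --ddgst so every read carries a data digest to verify. A rough standalone sketch of that sequence follows; the commands and addresses are taken from the trace, while the socket that the bare accel_error_inject_error calls resolve to (rpc.py's default here) and the waitforlisten stand-in are assumptions:

    SPDK=/home/vagrant/spdk_repo/spdk
    # bdevperf in wait-for-RPC mode (-z), driven over its own UNIX socket
    "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # stand-in for the framework's waitforlisten
    # per-controller NVMe error counters plus unlimited bdev retries, so failed reads are counted rather than fatal
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # injection off while the controller attaches, then data digest enabled on the connection
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results (-t corrupt -i 32, as traced) and run the 2-second workload
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests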
00:19:34.327 [2024-07-25 14:07:43.390024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.393901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.393930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.393937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.397728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.397759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.397766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.401632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.401669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.405444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.405474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.409271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.409311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.409319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.413275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.413310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.413318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.417188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.417217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.417224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.421092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.421124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.327 [2024-07-25 14:07:43.421131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.327 [2024-07-25 14:07:43.425129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.327 [2024-07-25 14:07:43.425161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.425169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.429034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.429063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.429070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.432945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.432975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.432983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.436814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.436845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.436852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.440762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.440791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.440799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.444799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.444830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.444838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.448859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.448890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.448898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.452893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.452925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.452933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.456747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.456775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.456783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.460865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.460899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.460908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.464804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.464832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.464839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.468643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.468672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.468679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.472556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:34.328 [2024-07-25 14:07:43.472585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.472592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.476418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.476447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.476454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.480380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.480407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.480415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.484278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.484319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.484327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.488179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.488211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.488219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.492209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.492240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.492247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.496029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.496058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.496066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.499945] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.499974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.499981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.503786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.503817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.503825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.507689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.507721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.507729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.511527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.511558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.511565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.515470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.515499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.515506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.519318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.519341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.519348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.523135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.523165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.523172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:19:34.328 [2024-07-25 14:07:43.527009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.328 [2024-07-25 14:07:43.527041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.328 [2024-07-25 14:07:43.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.530909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.530936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.530943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.534807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.534835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.534843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.538552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.538581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.538588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.542323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.542351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.542358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.546128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.546158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.546166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.549810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.549840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.549848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.553623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.553653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.553661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.557462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.557484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.557491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.561391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.561414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.561422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.565269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.565309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.565317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.569171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.569200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.569207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.573097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.573128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.573136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.577029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.577075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.577083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.581085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.581115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.585056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.585084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.585091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.589074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.589101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.589108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.592905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.592933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.592941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.596771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.596802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.596811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.600694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.600721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.600728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.604718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.604746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.608646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.608674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.608681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.612520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.612549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.612556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.616280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.616323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.616331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.620050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.620080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.620087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.623931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.623973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.623980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.329 [2024-07-25 14:07:43.627607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.329 [2024-07-25 14:07:43.627637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.329 [2024-07-25 14:07:43.627644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.631270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.631312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.631319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.635019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.635048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.635056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.638875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.638904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.638911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.642657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.642686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.642693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.646427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.646459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.646467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.650448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.650476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.650483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.654396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.654424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.654431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.658226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.658257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.658264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.662183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.662215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.662223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.666043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.666073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.666080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.669858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.669887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.669895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.673602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.673631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.673638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.677373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.677401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.677408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.681163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.607 [2024-07-25 14:07:43.681193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.681201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.684967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
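Every failed read in this stretch leaves the same three records: the data digest *ERROR* from nvme_tcp.c:1459, the READ command print from nvme_qpair.c:243, and a completion from nvme_qpair.c:474 carrying COMMAND TRANSIENT TRANSPORT ERROR (00/22), the status that the --nvme-error-stat counter command_transient_transport_error accumulates and that the test asserts on after each pass, as in the (( 122 > 0 )) check above for the previous run. When triaging a saved copy of this console output, the two counts below should line up; the file name is only a placeholder:

    grep -c 'data digest error on tqpair' bdevperf.console.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.console.log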
00:19:34.607 [2024-07-25 14:07:43.684998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.607 [2024-07-25 14:07:43.685005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.607 [2024-07-25 14:07:43.688759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.688789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.688796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.692556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.692584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.692591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.696395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.696419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.696426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.700216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.700244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.700251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.703998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.704026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.704033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.707768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.707796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.707803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.711580] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.711607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.711615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.715283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.715323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.715330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.719048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.719075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.719082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.722892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.722920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.722927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.726623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.726654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.726661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.730294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.730335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.730343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.734012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.734042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.734050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.737694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.737724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.737731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.741451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.741475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.741482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.745259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.745286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.745293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.748924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.748952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.748959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.752570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.752597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.608 [2024-07-25 14:07:43.752604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.608 [2024-07-25 14:07:43.756258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.608 [2024-07-25 14:07:43.756285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.756292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.759952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.759980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.759986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.763578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.763604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.763611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.767332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.767360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.767367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.771076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.771105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.771112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.774795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.774831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.778482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.778509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.778517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.782213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.782242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.782249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.785957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.785994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.789637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.789665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.789672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.793192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.793219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.793226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.796871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.796897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.796905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.800504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.800532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.800539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.804147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.804176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.804183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.807894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.807923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.807930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.811528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.811556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:34.609 [2024-07-25 14:07:43.811564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.815234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.815270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.819039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.819066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.609 [2024-07-25 14:07:43.819073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.609 [2024-07-25 14:07:43.822703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.609 [2024-07-25 14:07:43.822730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.822737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.826560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.826592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.826600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.830399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.830428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.830435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.834323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.834355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.834363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.838340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.838372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.838380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.842419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.842449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.842457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.846501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.846533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.846542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.850548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.850580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.850588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.854659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.854691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.854699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.858725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.858767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.858775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.862657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.862690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.862698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.866615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.866648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.866656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.870589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.870622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.870630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.874445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.874473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.874480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.878151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.878179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.878187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.881953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.881983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.881990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.885780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.885809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.885816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.889656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.610 [2024-07-25 14:07:43.889685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.889693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.893495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:34.610 [2024-07-25 14:07:43.893527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.610 [2024-07-25 14:07:43.893535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.610 [2024-07-25 14:07:43.897344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.611 [2024-07-25 14:07:43.897408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.611 [2024-07-25 14:07:43.897415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.901272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.901311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.901319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.905127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.905156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.905163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.909042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.909078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.912864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.912893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.912901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.916683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.916728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.916736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.920512] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.920541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.920548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.924289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.924330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.924338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.928096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.928127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.928134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.931910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.931939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.931947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.915 [2024-07-25 14:07:43.935876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.915 [2024-07-25 14:07:43.935907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.915 [2024-07-25 14:07:43.935914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.939870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.939902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.939911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.944051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.944081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.944089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.948265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.948307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.948316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.952508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.952537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.952546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.956655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.956687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.956695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.960998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.961031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.961040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.965173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.965205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.965213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.969348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.969374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.969382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.973464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.973493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.973501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.977545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.977574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.977583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.981587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.981625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.985584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.985615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.985623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.989462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.989489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.989496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.993313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.993339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.993346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:43.997192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:43.997220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:43.997228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.001053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.001090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.004958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.004988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.004995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.008841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.012660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.012694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.012701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.016608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.016639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.016646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.020645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.020676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.020684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.024728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.024760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.916 [2024-07-25 14:07:44.024767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.028762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.028795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:34.916 [2024-07-25 14:07:44.028802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.916 [2024-07-25 14:07:44.032647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.916 [2024-07-25 14:07:44.032677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.032684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.036505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.036534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.036541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.040325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.040351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.040357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.044142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.044171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.044178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.047975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.048003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.048011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.051816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.051843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.051850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.055620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.055649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.055656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.059506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.059543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.063462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.063490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.063497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.067455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.067484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.067492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.071520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.071549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.071557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.075558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.075587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.075594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.079489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.079525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.083327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.083355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.083362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.086992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.087023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.087030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.090724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.090754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.090761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.094465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.094493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.094500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.098175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.098204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.098211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.101864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.101901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.105481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.105510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.109206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:34.917 [2024-07-25 14:07:44.109234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.109242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.113076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.113104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.113111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.116964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.116993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.117000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.120884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.120912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.120919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.917 [2024-07-25 14:07:44.124666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.917 [2024-07-25 14:07:44.124694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.917 [2024-07-25 14:07:44.124702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.128420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.128448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.128455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.132219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.132250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.132257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.135918] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.135947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.139616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.139645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.139653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.143371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.143392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.143398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.147072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.147112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.147119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.150791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.150818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.150825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.154537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.154569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.154576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.918 [2024-07-25 14:07:44.158314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:34.918 [2024-07-25 14:07:44.158343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.918 [2024-07-25 14:07:44.158351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:19:34.918 [2024-07-25 14:07:44.162220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200)
00:19:34.918 [2024-07-25 14:07:44.162252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:34.918 [2024-07-25 14:07:44.162260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-message pattern (nvme_tcp.c:1459 data digest error on tqpair=(0x1cb5200), nvme_qpair.c:243 READ sqid:1 cid:15, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for further LBAs from 14:07:44.166 through 14:07:44.716 ...]
00:19:35.447 [2024-07-25 14:07:44.719931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200)
00:19:35.447 [2024-07-25 14:07:44.719959] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.447 [2024-07-25 14:07:44.719966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.447 [2024-07-25 14:07:44.723752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.447 [2024-07-25 14:07:44.723781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.447 [2024-07-25 14:07:44.723788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.447 [2024-07-25 14:07:44.727533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.447 [2024-07-25 14:07:44.727561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.447 [2024-07-25 14:07:44.727567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.448 [2024-07-25 14:07:44.731241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.448 [2024-07-25 14:07:44.731268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.448 [2024-07-25 14:07:44.731274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.448 [2024-07-25 14:07:44.735050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.448 [2024-07-25 14:07:44.735077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.448 [2024-07-25 14:07:44.735084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.448 [2024-07-25 14:07:44.738744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.448 [2024-07-25 14:07:44.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.448 [2024-07-25 14:07:44.738778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.448 [2024-07-25 14:07:44.742463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.448 [2024-07-25 14:07:44.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.448 [2024-07-25 14:07:44.742502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.448 [2024-07-25 14:07:44.746240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:35.448 [2024-07-25 14:07:44.746271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.448 [2024-07-25 14:07:44.746279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.750016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.750047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.750054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.753773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.753802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.753809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.757456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.757488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.761209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.761236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.761243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.764990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.765018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.765026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.768589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.768615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.768622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.772235] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.772264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.772271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.776043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.776072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.776079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.779863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.779891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.779898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.783774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.783803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.783810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.787539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.787566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.787572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.791283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.791321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.791330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.795098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.795127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.795134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.798802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.798833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.802582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.802613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.802621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.806582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.806612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.806621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.810552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.810582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.709 [2024-07-25 14:07:44.810590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.709 [2024-07-25 14:07:44.814508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.709 [2024-07-25 14:07:44.814538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.814546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.818445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.818474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.822318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.822341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.822348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.826149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.826181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.826189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.830057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.830087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.830094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.833956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.833988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.833996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.837806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.837843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.841596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.841623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.841630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.845444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.845469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.845477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.849223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.849253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.849261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.853049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.853077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.853084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.856917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.856946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.856954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.860921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.860960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.865074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.865105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.865114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.869210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.869242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.873317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.873347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.873355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.877348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.877378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.710 [2024-07-25 14:07:44.877386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.881286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.881323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.881331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.885147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.885175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.885182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.888992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.889020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.889028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.892817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.892845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.892852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.896753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.896781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.896788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.900611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.900639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.900646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.904440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.904468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.904476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.908310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.908336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.912184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.912221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.710 [2024-07-25 14:07:44.916135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.710 [2024-07-25 14:07:44.916165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.710 [2024-07-25 14:07:44.916172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.919968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.919999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.920006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.923846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.923876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.923884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.927687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.927716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.927723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.931464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.931491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.931498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.935167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.935194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.935201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.938997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.939025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.939032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.942663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.942691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.942699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.946425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.946452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.946460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.950227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.950261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.950268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.953921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.953951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.953959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.957725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:35.711 [2024-07-25 14:07:44.957754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.957761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.961568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.961597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.961603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.965545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.965575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.969596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.969636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.973666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.973697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.973705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.977782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.977813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.977822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.981698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.981729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.981738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.985751] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.985782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.985790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.989737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.989769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.989777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.993815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.993848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.993857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:44.997636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:44.997665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:44.997672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:45.001504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:45.001540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:45.001547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:45.005277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:45.005316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:45.005324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.711 [2024-07-25 14:07:45.009043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.711 [2024-07-25 14:07:45.009072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.711 [2024-07-25 14:07:45.009079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:35.972 [2024-07-25 14:07:45.012827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.012857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.012864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.016630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.016659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.016666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.020456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.020484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.020491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.024339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.024364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.024371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.028291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.028326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.028335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.032210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.032238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.032245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.035967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.035997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.036005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.039840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.039868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.039876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.043642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.043669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.043675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.047426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.047454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.047462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.051324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.051352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.051359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.055273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.055317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.059255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.059284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.059291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.063168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.063199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.063206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.067115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.067146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.067153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.071035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.071065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.071072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.074985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.075015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.075023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.078733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.078763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.078770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.082588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.082619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.082627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.086498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.086528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.086536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.090414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.090446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.973 [2024-07-25 14:07:45.090454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.094419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.094455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.098355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.098381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.098388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.102466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.102498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.102507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.973 [2024-07-25 14:07:45.106544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.973 [2024-07-25 14:07:45.106575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.973 [2024-07-25 14:07:45.106584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.110630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.110662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.110671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.114820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.114851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.114859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.119008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.119041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.119049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.122984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.123014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.123022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.126988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.127019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.127027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.130910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.130941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.130948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.134828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.134858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.134865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.138693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.138724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.138732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.142555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.142587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.142596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.146414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.146443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.146451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.150301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.150335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.150343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.154159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.154190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.154198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.158059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.158090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.158098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.162009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.162041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.162049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.166030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.166063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.166071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.169947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.169979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.169987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.173824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 
00:19:35.974 [2024-07-25 14:07:45.173856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.173864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.177692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.177724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.177732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.181516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.181567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.181576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.185376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.185401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.185408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.974 [2024-07-25 14:07:45.189165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.974 [2024-07-25 14:07:45.189194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.974 [2024-07-25 14:07:45.189201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.193129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.193159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.193167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.197042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.197070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.197078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.200886] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.200914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.200921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.204729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.204757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.204764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.208595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.208622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.208629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.212401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.212428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.212436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.216315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.216344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.216351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.220249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.220280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.220288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.224206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.224235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.224242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.228065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.228103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.231886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.231917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.231924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.235593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.235622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.235629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.239383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.239407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.239414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.243087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.243119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.243126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.246805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.246844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.250511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.250542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.250549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.254278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.254321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.254329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.258100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.258136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.261891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.261921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.261928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.265628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.265657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.265664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.269360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.269388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.269395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.975 [2024-07-25 14:07:45.273116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:35.975 [2024-07-25 14:07:45.273145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.975 [2024-07-25 14:07:45.273153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.276907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.276936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.276943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.280730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.280760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.280767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.284470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.284516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.284524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.288275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.288313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.288320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.292033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.292061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.292068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.295851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.295878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.295885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.299709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.299735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.299742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.303524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.303551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.234 [2024-07-25 14:07:45.303558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.307318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.307344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.307351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.311296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.311330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.234 [2024-07-25 14:07:45.311338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.234 [2024-07-25 14:07:45.315214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.234 [2024-07-25 14:07:45.315243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.235 [2024-07-25 14:07:45.315250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.235 [2024-07-25 14:07:45.319352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.235 [2024-07-25 14:07:45.319381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.235 [2024-07-25 14:07:45.319389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.235 [2024-07-25 14:07:45.323429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.235 [2024-07-25 14:07:45.323457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.235 [2024-07-25 14:07:45.323466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.235 [2024-07-25 14:07:45.327463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.235 [2024-07-25 14:07:45.327491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.235 [2024-07-25 14:07:45.327499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.235 [2024-07-25 14:07:45.331520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200) 00:19:36.235 [2024-07-25 14:07:45.331549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:36.235 [2024-07-25 14:07:45.331558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:36.235 [2024-07-25 14:07:45.335635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cb5200)
00:19:36.235 [2024-07-25 14:07:45.335665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:36.235 [2024-07-25 14:07:45.335673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:36.235
00:19:36.235 Latency(us)
00:19:36.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:36.235 nvme0n1 : 2.00 7998.25 999.78 0.00 0.00 1997.79 1709.95 4349.99
00:19:36.235 ===================================================================================================================
00:19:36.235 Total : 7998.25 999.78 0.00 0.00 1997.79 1709.95 4349.99
00:19:36.235 0
00:19:36.235 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:36.235 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:36.235 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:36.235 | .driver_specific
00:19:36.235 | .nvme_error
00:19:36.235 | .status_code
00:19:36.235 | .command_transient_transport_error'
00:19:36.235 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 516 > 0 ))
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79650
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79650 ']'
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79650
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79650
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79650'
00:19:36.492 killing process with pid 79650 Received shutdown signal, test time was about 2.000000 seconds
00:19:36.492
00:19:36.492 Latency(us)
00:19:36.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.492 ===================================================================================================================
00:19:36.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79650
00:19:36.492 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79650
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79709
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79709 /var/tmp/bperf.sock
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79709 ']'
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:36.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:36.751 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:36.751 [2024-07-25 14:07:45.854748] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization...
00:19:36.751 [2024-07-25 14:07:45.854897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79709 ]
00:19:36.751 [2024-07-25 14:07:45.991553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:37.008 [2024-07-25 14:07:46.089865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:37.008 [2024-07-25 14:07:46.130427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:19:37.574 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:37.574 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:19:37.574 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:37.574 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:37.833 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:38.090 nvme0n1
00:19:38.090 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:38.091 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:38.091 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:38.091 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:38.091 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:38.091 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:38.091 Running I/O for 2 seconds...
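The xtrace above amounts to a short, repeatable sequence: start bdevperf against /var/tmp/bperf.sock, enable per-NVMe error counters and data digest when attaching the controller, tell the accel layer to corrupt every crc32c it computes, run the timed job, and then read the transient-error counter back out of bdev_get_iostat. The shell sketch below condenses that flow; it reuses only the sockets, paths, flags and names visible in the trace (rpc.py, bdevperf, /var/tmp/bperf.sock, nvme0/nvme0n1, nqn.2016-06.io.spdk:cnode1) and is an illustrative reconstruction, not the canonical host/digest.sh.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait-for-RPC mode (-z), as run_bperf_err does in the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable per-NVMe error statistics and attach the target with data digest checking on.
$RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Make every accel crc32c operation return a corrupted digest; the trace issues this
# through rpc_cmd, i.e. the application's default RPC socket rather than bperf.sock.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the timed job, then count completions that ended as COMMAND TRANSIENT TRANSPORT
# ERROR (the jq path mirrors the multi-line filter earlier in the trace).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
$RPC -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The "(( 516 > 0 ))" check earlier in the log is this same counter being asserted non-zero after the randread pass; the randwrite output that follows repeats the exercise for the write workload.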
00:19:38.348 [2024-07-25 14:07:47.411516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fef90 00:19:38.348 [2024-07-25 14:07:47.413936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.414058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.425445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190feb58 00:19:38.348 [2024-07-25 14:07:47.427704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.427783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.439014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fe2e8 00:19:38.348 [2024-07-25 14:07:47.441319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.441393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.452800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fda78 00:19:38.348 [2024-07-25 14:07:47.455016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.455100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.466060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fd208 00:19:38.348 [2024-07-25 14:07:47.468062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.468136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.479483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc998 00:19:38.348 [2024-07-25 14:07:47.481798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.481887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.494059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc128 00:19:38.348 [2024-07-25 14:07:47.496324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.496409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:19:38.348 [2024-07-25 14:07:47.507987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb8b8 00:19:38.348 [2024-07-25 14:07:47.510226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.510259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.521602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb048 00:19:38.348 [2024-07-25 14:07:47.523733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.348 [2024-07-25 14:07:47.523764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:38.348 [2024-07-25 14:07:47.535120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fa7d8 00:19:38.348 [2024-07-25 14:07:47.537179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.537211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.548583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f9f68 00:19:38.349 [2024-07-25 14:07:47.550732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.550762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.562161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f96f8 00:19:38.349 [2024-07-25 14:07:47.564248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.564280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.576018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8e88 00:19:38.349 [2024-07-25 14:07:47.578133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.578165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.590359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8618 00:19:38.349 [2024-07-25 14:07:47.592399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.592430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.604668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7da8 00:19:38.349 [2024-07-25 14:07:47.606706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.606739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.618735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7538 00:19:38.349 [2024-07-25 14:07:47.620740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.620775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.632375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6cc8 00:19:38.349 [2024-07-25 14:07:47.634368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.634400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.349 [2024-07-25 14:07:47.646189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6458 00:19:38.349 [2024-07-25 14:07:47.648164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.349 [2024-07-25 14:07:47.648196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.660323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f5be8 00:19:38.623 [2024-07-25 14:07:47.662384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.662417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.675427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f5378 00:19:38.623 [2024-07-25 14:07:47.677491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.677529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.690706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f4b08 00:19:38.623 [2024-07-25 14:07:47.692763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.692796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.705820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f4298 00:19:38.623 [2024-07-25 14:07:47.707862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.707896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.720382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f3a28 00:19:38.623 [2024-07-25 14:07:47.722264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.722303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.734249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f31b8 00:19:38.623 [2024-07-25 14:07:47.736115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.736146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.748395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f2948 00:19:38.623 [2024-07-25 14:07:47.750237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.750266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.761995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f20d8 00:19:38.623 [2024-07-25 14:07:47.763833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.763863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.776155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f1868 00:19:38.623 [2024-07-25 14:07:47.777977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.778006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.789812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f0ff8 00:19:38.623 [2024-07-25 14:07:47.791596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.791626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.802937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f0788 00:19:38.623 [2024-07-25 14:07:47.804669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.804696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.815882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eff18 00:19:38.623 [2024-07-25 14:07:47.817433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.817459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.828365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ef6a8 00:19:38.623 [2024-07-25 14:07:47.829964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.829993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.841692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eee38 00:19:38.623 [2024-07-25 14:07:47.843357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.843388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.854466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ee5c8 00:19:38.623 [2024-07-25 14:07:47.855953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.855981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.867073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190edd58 00:19:38.623 [2024-07-25 14:07:47.868642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.868672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.880710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ed4e8 00:19:38.623 [2024-07-25 14:07:47.882473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.623 [2024-07-25 14:07:47.882507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:38.623 [2024-07-25 14:07:47.895085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ecc78 00:19:38.624 [2024-07-25 14:07:47.896753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.624 [2024-07-25 14:07:47.896783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:38.624 [2024-07-25 14:07:47.907974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ec408 00:19:38.624 [2024-07-25 14:07:47.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.624 [2024-07-25 14:07:47.909440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.921697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ebb98 00:19:38.882 [2024-07-25 14:07:47.923309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.923337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.935642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eb328 00:19:38.882 [2024-07-25 14:07:47.937222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.937252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.949774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eaab8 00:19:38.882 [2024-07-25 14:07:47.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.951465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.964138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ea248 00:19:38.882 [2024-07-25 14:07:47.965708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.965739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.977632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e99d8 00:19:38.882 [2024-07-25 14:07:47.979178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.979212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:47.991432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e9168 00:19:38.882 [2024-07-25 14:07:47.992964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:47.992998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.006209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e88f8 00:19:38.882 [2024-07-25 14:07:48.007767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.007799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.020849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e8088 00:19:38.882 [2024-07-25 14:07:48.022374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.022406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.034723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e7818 00:19:38.882 [2024-07-25 14:07:48.036208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.036242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.048871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e6fa8 00:19:38.882 [2024-07-25 14:07:48.050430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.050463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.063321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e6738 00:19:38.882 [2024-07-25 14:07:48.064758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.064784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.077858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e5ec8 00:19:38.882 [2024-07-25 14:07:48.079286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.079320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.091262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e5658 00:19:38.882 [2024-07-25 14:07:48.092662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.092692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.104620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e4de8 00:19:38.882 [2024-07-25 14:07:48.106017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.106050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.118133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e4578 00:19:38.882 [2024-07-25 14:07:48.119430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.119455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.131350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e3d08 00:19:38.882 [2024-07-25 14:07:48.132537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.132564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.144182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e3498 00:19:38.882 [2024-07-25 14:07:48.145371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.145397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.157001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e2c28 00:19:38.882 [2024-07-25 14:07:48.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.158283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.169870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e23b8 00:19:38.882 [2024-07-25 14:07:48.171007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.171033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:38.882 [2024-07-25 14:07:48.182624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e1b48 00:19:38.882 [2024-07-25 14:07:48.183841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.882 [2024-07-25 14:07:48.183874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.196643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e12d8 00:19:39.141 [2024-07-25 14:07:48.197952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.197984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.210371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e0a68 00:19:39.141 [2024-07-25 14:07:48.211499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.211529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.223022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e01f8 00:19:39.141 [2024-07-25 14:07:48.224199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.224230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.235686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190df988 00:19:39.141 [2024-07-25 14:07:48.236750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.236779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.248230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190df118 00:19:39.141 [2024-07-25 14:07:48.249404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.249436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.261051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190de8a8 00:19:39.141 [2024-07-25 14:07:48.262153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 
14:07:48.262188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.273897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190de038 00:19:39.141 [2024-07-25 14:07:48.275000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.275035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.291997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190de038 00:19:39.141 [2024-07-25 14:07:48.294169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.294203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.305013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190de8a8 00:19:39.141 [2024-07-25 14:07:48.307262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.307292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.318632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190df118 00:19:39.141 [2024-07-25 14:07:48.320621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.331172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190df988 00:19:39.141 [2024-07-25 14:07:48.333335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.333364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.344315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e01f8 00:19:39.141 [2024-07-25 14:07:48.346455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.346484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.357240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e0a68 00:19:39.141 [2024-07-25 14:07:48.359213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:39.141 [2024-07-25 14:07:48.359239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.369890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e12d8 00:19:39.141 [2024-07-25 14:07:48.371784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.371810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.382496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e1b48 00:19:39.141 [2024-07-25 14:07:48.384381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.384410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.395497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e23b8 00:19:39.141 [2024-07-25 14:07:48.397598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.397630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.408558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e2c28 00:19:39.141 [2024-07-25 14:07:48.410413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.410441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.421085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e3498 00:19:39.141 [2024-07-25 14:07:48.422952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.422981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.141 [2024-07-25 14:07:48.433739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e3d08 00:19:39.141 [2024-07-25 14:07:48.435562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.141 [2024-07-25 14:07:48.435588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.399 [2024-07-25 14:07:48.446100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e4578 00:19:39.399 [2024-07-25 14:07:48.447909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25599 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:39.399 [2024-07-25 14:07:48.447935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:39.399 [2024-07-25 14:07:48.458397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e4de8 00:19:39.399 [2024-07-25 14:07:48.460191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.399 [2024-07-25 14:07:48.460220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.399 [2024-07-25 14:07:48.470673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e5658 00:19:39.400 [2024-07-25 14:07:48.472456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.472484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.482985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e5ec8 00:19:39.400 [2024-07-25 14:07:48.484767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.484794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.495384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e6738 00:19:39.400 [2024-07-25 14:07:48.497116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.497144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.508134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e6fa8 00:19:39.400 [2024-07-25 14:07:48.510047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.510087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.520843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e7818 00:19:39.400 [2024-07-25 14:07:48.522736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.522763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.533788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e8088 00:19:39.400 [2024-07-25 14:07:48.535542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11373 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.535569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.546336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e88f8 00:19:39.400 [2024-07-25 14:07:48.548006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.548033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.559052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e9168 00:19:39.400 [2024-07-25 14:07:48.560787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.560861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.571615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190e99d8 00:19:39.400 [2024-07-25 14:07:48.573327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.573413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.584141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ea248 00:19:39.400 [2024-07-25 14:07:48.585876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.585950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.597009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eaab8 00:19:39.400 [2024-07-25 14:07:48.598920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.599005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.610980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eb328 00:19:39.400 [2024-07-25 14:07:48.612853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.612932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.623955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ebb98 00:19:39.400 [2024-07-25 14:07:48.625598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:282 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.625668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.636538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ec408 00:19:39.400 [2024-07-25 14:07:48.638196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.638269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.649214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ecc78 00:19:39.400 [2024-07-25 14:07:48.650846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.650934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.661706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ed4e8 00:19:39.400 [2024-07-25 14:07:48.663307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.663378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.674295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190edd58 00:19:39.400 [2024-07-25 14:07:48.675887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.675975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.686833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ee5c8 00:19:39.400 [2024-07-25 14:07:48.688585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.688664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:39.400 [2024-07-25 14:07:48.700187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eee38 00:19:39.400 [2024-07-25 14:07:48.701942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.400 [2024-07-25 14:07:48.701974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.714365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190ef6a8 00:19:39.660 [2024-07-25 14:07:48.716034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.716064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.728236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190eff18 00:19:39.660 [2024-07-25 14:07:48.729907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.729937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.742195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f0788 00:19:39.660 [2024-07-25 14:07:48.743854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.743883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.755854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f0ff8 00:19:39.660 [2024-07-25 14:07:48.757467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.757505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.768858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f1868 00:19:39.660 [2024-07-25 14:07:48.770336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.770366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.781400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f20d8 00:19:39.660 [2024-07-25 14:07:48.782827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.660 [2024-07-25 14:07:48.782857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.660 [2024-07-25 14:07:48.794053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f2948 00:19:39.660 [2024-07-25 14:07:48.795502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.795531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.807590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f31b8 00:19:39.661 [2024-07-25 14:07:48.809151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:19076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.809185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.820625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f3a28 00:19:39.661 [2024-07-25 14:07:48.822100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.822131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.834091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f4298 00:19:39.661 [2024-07-25 14:07:48.835612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.835641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.848015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f4b08 00:19:39.661 [2024-07-25 14:07:48.849542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.849570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.861105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f5378 00:19:39.661 [2024-07-25 14:07:48.862535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.862563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.873836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f5be8 00:19:39.661 [2024-07-25 14:07:48.875148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.886363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6458 00:19:39.661 [2024-07-25 14:07:48.887692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.887720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.898842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6cc8 00:19:39.661 [2024-07-25 14:07:48.900093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:9547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.900119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.911621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7538 00:19:39.661 [2024-07-25 14:07:48.912968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.912996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.924416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7da8 00:19:39.661 [2024-07-25 14:07:48.925778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.925806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.936888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8618 00:19:39.661 [2024-07-25 14:07:48.938079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.938104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.949183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8e88 00:19:39.661 [2024-07-25 14:07:48.950556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.950583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:39.661 [2024-07-25 14:07:48.962890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f96f8 00:19:39.661 [2024-07-25 14:07:48.964219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.661 [2024-07-25 14:07:48.964247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:48.977133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f9f68 00:19:39.921 [2024-07-25 14:07:48.978535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:48.978564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:48.991130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fa7d8 00:19:39.921 [2024-07-25 14:07:48.992361] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:48.992385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.003525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb048 00:19:39.921 [2024-07-25 14:07:49.004706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.004733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.016263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb8b8 00:19:39.921 [2024-07-25 14:07:49.017509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.017542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.028921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc128 00:19:39.921 [2024-07-25 14:07:49.030100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.030129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.041703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc998 00:19:39.921 [2024-07-25 14:07:49.042785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.042811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.054546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fd208 00:19:39.921 [2024-07-25 14:07:49.055757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.055784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.067945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fda78 00:19:39.921 [2024-07-25 14:07:49.069109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.069137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.081752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fe2e8 00:19:39.921 [2024-07-25 14:07:49.083001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.083029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.096676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190feb58 00:19:39.921 [2024-07-25 14:07:49.097920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.097951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.117352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fef90 00:19:39.921 [2024-07-25 14:07:49.119745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.119775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.131698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190feb58 00:19:39.921 [2024-07-25 14:07:49.134080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.134110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.146391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fe2e8 00:19:39.921 [2024-07-25 14:07:49.148825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.148852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.161587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fda78 00:19:39.921 [2024-07-25 14:07:49.163949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.163980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.176620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fd208 00:19:39.921 [2024-07-25 14:07:49.178990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.179020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.191518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc998 00:19:39.921 [2024-07-25 
14:07:49.193851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.193882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.206262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fc128 00:19:39.921 [2024-07-25 14:07:49.208500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.208535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:39.921 [2024-07-25 14:07:49.221980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb8b8 00:19:39.921 [2024-07-25 14:07:49.224314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.921 [2024-07-25 14:07:49.224353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.237422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fb048 00:19:40.181 [2024-07-25 14:07:49.239748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.239788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.252429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190fa7d8 00:19:40.181 [2024-07-25 14:07:49.254673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.254712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.267693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f9f68 00:19:40.181 [2024-07-25 14:07:49.269932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.269985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.283060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f96f8 00:19:40.181 [2024-07-25 14:07:49.285289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.285333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.298056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8e88 00:19:40.181 
[2024-07-25 14:07:49.300251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.300316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.313022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f8618 00:19:40.181 [2024-07-25 14:07:49.315217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.315258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.328343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7da8 00:19:40.181 [2024-07-25 14:07:49.330546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.330591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.343731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f7538 00:19:40.181 [2024-07-25 14:07:49.345920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.345964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.359030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6cc8 00:19:40.181 [2024-07-25 14:07:49.361193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.361232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.374063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f6458 00:19:40.181 [2024-07-25 14:07:49.376211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.376251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:40.181 [2024-07-25 14:07:49.389219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61e650) with pdu=0x2000190f5be8 00:19:40.181 [2024-07-25 14:07:49.391378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.181 [2024-07-25 14:07:49.391419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:40.181 00:19:40.181 Latency(us) 00:19:40.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.181 Job: nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.181 nvme0n1 : 2.01 18610.04 72.70 0.00 0.00 6871.77 4779.26 27702.55 00:19:40.181 =================================================================================================================== 00:19:40.181 Total : 18610.04 72.70 0.00 0.00 6871.77 4779.26 27702.55 00:19:40.181 0 00:19:40.181 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:40.181 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:40.181 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:40.181 | .driver_specific 00:19:40.181 | .nvme_error 00:19:40.181 | .status_code 00:19:40.181 | .command_transient_transport_error' 00:19:40.181 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:40.440 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:19:40.440 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79709 00:19:40.440 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79709 ']' 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79709 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79709 00:19:40.441 killing process with pid 79709 00:19:40.441 Received shutdown signal, test time was about 2.000000 seconds 00:19:40.441 00:19:40.441 Latency(us) 00:19:40.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.441 =================================================================================================================== 00:19:40.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79709' 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79709 00:19:40.441 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79709 00:19:40.700 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=16 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79765 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79765 /var/tmp/bperf.sock 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79765 ']' 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:40.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.701 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:40.701 [2024-07-25 14:07:49.916183] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:19:40.701 [2024-07-25 14:07:49.916383] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:19:40.701 Zero copy mechanism will not be used. 
00:19:40.701 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79765 ] 00:19:40.960 [2024-07-25 14:07:50.053745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.960 [2024-07-25 14:07:50.158343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.960 [2024-07-25 14:07:50.203648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.529 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.529 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:19:41.529 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:41.529 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:41.789 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:41.789 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.789 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:42.048 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.048 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:42.048 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:42.048 nvme0n1 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:42.313 14:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:42.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:42.313 Zero copy mechanism will not be used. 00:19:42.313 Running I/O for 2 seconds... 
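The trace above is the setup for the second error-injection pass (randwrite, 128 KiB I/O, queue depth 16): bdevperf pid 79765 is started idle and then driven entirely over /var/tmp/bperf.sock. Condensed out of the wrapped xtrace lines, the sequence is roughly the sketch below. This is a reconstruction for readability, not the test script itself; it assumes the nvmf-tcp target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and that accel_error_inject_error (issued through rpc_cmd, whose underlying rpc.py call is not shown in this excerpt) lands on the target application's default RPC socket rather than on bperf.sock.

    # start bdevperf idle; -z makes it wait for the perform_tests RPC (flags as launched above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # initiator side: keep per-bdev NVMe error counters and retry failed commands indefinitely
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side (assumed default RPC socket): clear any crc32c injection left over from the previous pass
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # attach the controller with the TCP data digest enabled, so every data PDU carries a CRC32C
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: start corrupting crc32c results (arguments exactly as issued above), then run the 2-second workload
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests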
00:19:42.314 [2024-07-25 14:07:51.473460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.473931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.477527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.477640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.477663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.481698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.481769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.481792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.485883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.485958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.485981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.490022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.490089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.494260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.494351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.494374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.498253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.498333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.498371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.502296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.502414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.502454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.506627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.506761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.506783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.510423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.510730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.510765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.514552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.514624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.514647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.518776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.518845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.518867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.523173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.523250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.523277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.527661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.527760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.527785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.532163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.532233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.532259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.536503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.536602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.536624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.540737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.540886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.540913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.544880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.545030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.545056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.314 [2024-07-25 14:07:51.548523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.314 [2024-07-25 14:07:51.548850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.314 [2024-07-25 14:07:51.548886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.552647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.552720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.552744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.557025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.557100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.557123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.561387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.561451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.561476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.565383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.565444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.565467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.569542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.569615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.569640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.573860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.573934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.573959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.578051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.578134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.578158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.582285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.582463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.582490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.586920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.586997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.587028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.590842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.591265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.591314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.595087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.595180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.595205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.599440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.599498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.599523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.603720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.603783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.603805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.607908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.607971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.607993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.612012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.612078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 14:07:51.612099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.315 [2024-07-25 14:07:51.616150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.315 [2024-07-25 14:07:51.616270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.315 [2024-07-25 
14:07:51.616309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.620535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.620610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.620630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.624979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.625131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.625158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.629347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.629415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.629437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.633738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.633836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.633864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.637943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.638004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.638024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.641568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.641951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.641982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.645560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.645642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:42.606 [2024-07-25 14:07:51.645660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.649569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.649632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.649651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.653474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.653534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.653551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.657433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.657495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.657515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.661713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.661774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.661795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.665958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.666019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.666040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.670227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.670304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.670336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.674639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.674780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.674801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.678476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.678791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.678816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.682382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.682444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.682464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.686460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.686528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.606 [2024-07-25 14:07:51.686548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.606 [2024-07-25 14:07:51.690541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.606 [2024-07-25 14:07:51.690599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.690617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.694574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.694635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.694655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.698716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.698824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.698842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.702854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.702982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.703001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.706898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.706968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.706986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.711273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.711397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.711415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.715136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.715465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.715488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.719128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.719198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.719218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.723292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.723373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.723394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.727409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.727478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.731470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.731549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.731566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.735530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.735587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.735607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.739720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.739791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.739812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.744084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.744176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.744202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.748648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.748751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.748775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.752245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.752319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.752342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.756730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.756795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.756820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.761136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.761201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.761226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.765571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.765654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.765680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.769845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.769938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.774154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.774250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.778487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.778577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.778601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.782866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.782952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.786712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.787127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.787159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.790984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 
[2024-07-25 14:07:51.791071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.791095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.795375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.795433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.795456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.799702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.607 [2024-07-25 14:07:51.799761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.607 [2024-07-25 14:07:51.799782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.607 [2024-07-25 14:07:51.804044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.804125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.804147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.808329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.808478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.812504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.812596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.812616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.816666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.816821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.820453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with 
pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.820781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.820806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.824436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.824488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.824507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.828454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.828502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.828522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.832509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.832574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.832592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.836746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.836841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.841016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.841071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.841091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.845131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.845253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.845272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.849319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.849380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.849398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.853446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.853606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.853625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.857176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.857492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.857514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.861059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.861116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.861134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.864931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.864977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.864995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.868785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.868836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.868854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.872729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.872800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.872819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.876609] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.876703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.876721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.880693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.880830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.880850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.884918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.885010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.885032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.889139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.889322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.889344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.893306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.893489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.893510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.608 [2024-07-25 14:07:51.897646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.608 [2024-07-25 14:07:51.897807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.608 [2024-07-25 14:07:51.897826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.901966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.902123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.902142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.905716] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.906048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.906071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.909573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.909632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.909652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.913708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.913772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.913793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.917774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.917842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.917864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.921950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.922011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.922031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.926137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.926195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.926217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.930484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.930541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.930565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
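Every record in the stream above is one injected digest failure: tcp.c flags the CRC32C mismatch on a received data PDU (data_crc32_calc_done), and the matching completion is printed by spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. SCT 0x0 / SC 0x22. These are the completions that feed the command_transient_transport_error counter asserted on at the end of each pass. As a rough cross-check, sketched only (bperf.log is a placeholder name for wherever this console output happens to be captured):

    # completions flagged as transient transport errors in the captured output...
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

    # ...versus the initiator-side counter that get_transient_errcount reads
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'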
00:19:42.882 [2024-07-25 14:07:51.934707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.934798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.934820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.938894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.939022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.939043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.943123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.943267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.943311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.946900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.947237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.947270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.951333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.951396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.951421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.955586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.955651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.955674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.959759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.959827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.964061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.964129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.964152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.968469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.968548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.968573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.972859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.972956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.972984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.977085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.977262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.977285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.980991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.981351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.882 [2024-07-25 14:07:51.981385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.882 [2024-07-25 14:07:51.985031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.882 [2024-07-25 14:07:51.985093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:51.985116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:51.989133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:51.989192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:51.989214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:51.993431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:51.993493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:51.993515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:51.997751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:51.997822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:51.997846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.002175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.002255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.002278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.006447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.006518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.006541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.010615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.010697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.014756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.014931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.014959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.018595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.018930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.018966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.022703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.022791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.022814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.026935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.027010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.027051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.031115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.031183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.031209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.035386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.035454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.035479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.039654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.039732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.039755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.043831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.043921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.043942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.047938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.048105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.048123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.052049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.052132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.052153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.056266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.056410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.059804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.060077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.060118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.063769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.063826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.063844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.067862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.067914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.067934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.071912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.071966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.071985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.075989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.076068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 
14:07:52.076088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.080138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.080192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.080212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.084141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.084254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.084273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.088256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.088349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.088369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.092254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.092322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.883 [2024-07-25 14:07:52.092341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.883 [2024-07-25 14:07:52.095887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.883 [2024-07-25 14:07:52.096293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.096333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.099952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.100054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.100076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.104109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.104171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:42.884 [2024-07-25 14:07:52.104195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.108329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.108405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.108428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.112573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.112644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.112665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.116830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.116895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.116917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.121055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.121179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.121199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.125376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.125461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.125481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.129753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.129842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.129862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.134134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.134226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.134247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.138428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.138489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.138511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.142167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.142609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.142646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.146492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.146578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.146601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.150918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.150984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.151007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.155214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.155299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.155335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.159561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.159629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.159652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.163959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.164081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.164109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.168797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.168889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.168923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.172803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.173188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.173219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.177144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.177257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.884 [2024-07-25 14:07:52.181290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:42.884 [2024-07-25 14:07:52.181373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.884 [2024-07-25 14:07:52.181393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.185816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.185902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.185926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.190661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.190777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.195260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.195373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.195398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.200099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.200189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.204791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.204929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.204956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.209477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.209596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.209622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.213919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.214088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.214112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.218525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.218717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.218750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.223154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.223346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.223370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.227202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.227579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.227606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.231503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.231568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.231589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.235640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.235722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.235743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.239993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.240060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.240081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.244451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.244518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.244539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.248873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.248934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.248956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.253200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.253289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.144 [2024-07-25 14:07:52.253309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.144 [2024-07-25 14:07:52.257564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.144 [2024-07-25 14:07:52.257670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.257693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.261939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.262004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.262025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.265724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.266136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.266166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.269842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.269934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.269954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.274058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.274123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.274145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.278292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.278377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.278398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.282676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.282743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.282766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.286833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 
[2024-07-25 14:07:52.286919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.286942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.291032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.291137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.291159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.295630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.295735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.295759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.300050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.300196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.300219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.304029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.304357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.304384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.308237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.308303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.308338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.312642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.312724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.312746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.316975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with 
pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.317034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.317057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.321373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.321444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.321468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.325831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.325897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.325920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.330106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.330232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.330256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.334407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.334490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.334516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.338796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.338999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.339027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.343358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.343515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.343543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.347873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.348035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.348057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.352331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.352474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.145 [2024-07-25 14:07:52.356240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.145 [2024-07-25 14:07:52.356585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.145 [2024-07-25 14:07:52.356611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.360346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.360406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.360427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.364619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.364680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.364702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.368929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.368991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.369013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.373066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.373145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.377080] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.377148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.377167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.381131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.381202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.381222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.385195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.385274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.389211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.389273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.389290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.392631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.393051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.393074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.396634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.396718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.396738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.400900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.400959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.400980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.146 
[2024-07-25 14:07:52.405053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.405160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.405180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.409325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.409415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.413434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.413558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.413595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.417643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.417734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.417756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.422018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.422190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.422213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.426035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.426376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.426404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.430178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.430247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.430271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.434702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.434788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.434812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.439026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.439094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.439119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.146 [2024-07-25 14:07:52.443469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.146 [2024-07-25 14:07:52.443554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.146 [2024-07-25 14:07:52.443576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.447960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.448047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.448071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.452448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.452549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.452572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.456815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.456976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.456997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.461190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.461332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.461358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.465107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.465467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.465499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.469139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.469201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.469221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.406 [2024-07-25 14:07:52.473415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.406 [2024-07-25 14:07:52.473474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.406 [2024-07-25 14:07:52.473497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.477739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.477803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.477824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.481994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.482064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.482085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.486221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.486297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.486335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.490513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.490617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.490643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.494823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.494906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.494925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.499096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.499271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.503059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.503392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.503439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.507508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.507989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.508013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.511527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.511620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.511639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.515730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.515781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.515799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.519882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.519942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.519962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.523961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.524016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.524034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.527983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.528056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.528075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.531927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.532038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.532056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.535977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.536069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.536090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.540060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.540119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.540138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.543602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.544017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 14:07:52.544042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.547392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.407 [2024-07-25 14:07:52.547472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.407 [2024-07-25 
14:07:52.547489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.407 [2024-07-25 14:07:52.551429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.551484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.551503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.555421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.555477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.555497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.559149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.559235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.559254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.563013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.563122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.563144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.566937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.567081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.567105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.570656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.570783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.570807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.574067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.574367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:43.408 [2024-07-25 14:07:52.574390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.577634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.577692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.577710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.581361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.581422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.581441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.585136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.585211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.585229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.589067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.589136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.589157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.593475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.593561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.593584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.597805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.597878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.597903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.602170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.602237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.602261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.606551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.606621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.606644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.610563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.610986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.611019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.614925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.615028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.615053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.619351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.619419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.408 [2024-07-25 14:07:52.619441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.408 [2024-07-25 14:07:52.623818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.408 [2024-07-25 14:07:52.623909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.623932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.628216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.628363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.628385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.632352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.632408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.632427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.636583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.636664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.636684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.640800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.640917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.640938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.644410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.644742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.644768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.648353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.648417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.648434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.652436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.652492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.652511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.656519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.656573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.656591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.660466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.660548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.660565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.664454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.664509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.664527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.668599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.668683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.668702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.672685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.672764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.672782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.676657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.676713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.676731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.680274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.680683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.680713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.684192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.684275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.684293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.688179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.688232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.688252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.692322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.692378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.692397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.409 [2024-07-25 14:07:52.696273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.409 [2024-07-25 14:07:52.696343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.409 [2024-07-25 14:07:52.696361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.410 [2024-07-25 14:07:52.700272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.410 [2024-07-25 14:07:52.700362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.410 [2024-07-25 14:07:52.700380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.410 [2024-07-25 14:07:52.704444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.410 [2024-07-25 14:07:52.704536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.410 [2024-07-25 14:07:52.704555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.410 [2024-07-25 14:07:52.708375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.410 [2024-07-25 14:07:52.708557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.410 [2024-07-25 14:07:52.708582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.711893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.712210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.712237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.715784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 
[2024-07-25 14:07:52.715841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.715859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.719747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.719807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.719828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.723771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.723830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.723850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.727696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.727764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.727781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.731660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.731717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.731736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.735757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.735845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.735864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.739872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.739943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.739960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.743798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with 
pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.743847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.743863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.747355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.747717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.747743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.751062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.751138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.670 [2024-07-25 14:07:52.751155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.670 [2024-07-25 14:07:52.754821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.670 [2024-07-25 14:07:52.754872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.754889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.758601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.758666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.758684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.762361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.762423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.762440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.766114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.766182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.766199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.769845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.769970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.769989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.773945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.774082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.774125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.777474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.777812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.781325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.781407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.785278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.785364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.785385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.789395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.789457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.789480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.793627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.793719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.793741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.798019] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.798124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.802336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.802402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.802425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.806663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.806773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.806796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.811098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.811167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.811187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.814952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.815427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.815459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.819314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.819407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.819429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.823522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.823588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.823609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.827848] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.827921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.827943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.832199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.832267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.832289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.836466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.836581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.836601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.840711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.840796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.844955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.845012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.845031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.849129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.849274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.852833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.853168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.853196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.671 
[2024-07-25 14:07:52.856704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.856759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.856777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.860749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.860801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.671 [2024-07-25 14:07:52.860821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.671 [2024-07-25 14:07:52.864694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.671 [2024-07-25 14:07:52.864749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.864767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.868684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.868745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.868765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.872602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.872698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.872718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.876489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.876591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.876610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.880388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.880514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.880539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.884282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.884430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.884449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.888179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.888430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.891793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.892093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.892123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.895682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.895743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.895764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.899534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.899597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.899615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.903428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.903489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.903509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.907266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.907347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.907365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.911207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.911332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.911352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.915092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.915160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.915178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.918955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.919064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.922882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.923052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.926541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.926863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.926894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.930487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.930552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.930572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.934514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.934576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.934597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.938422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.938482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.938502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.942380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.942448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.942468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.946356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.946420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.946441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.950251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.950376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.950400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.954388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.954465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.954484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.958439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.958508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.962466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.962531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.962550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.965972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.672 [2024-07-25 14:07:52.966383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.672 [2024-07-25 14:07:52.966412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.672 [2024-07-25 14:07:52.969795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.673 [2024-07-25 14:07:52.969883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.673 [2024-07-25 14:07:52.969902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.973832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.973918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.977992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.978052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.978073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.982355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.982429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.982451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.986408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.986499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.986521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.990710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.990828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.990850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.994823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.994901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.994923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:52.998995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:52.999059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:52.999080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:53.002759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:53.003180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:53.003216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:53.006872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:53.006958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:53.006979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:53.010968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:53.011033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:53.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:53.015111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:53.015172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.933 [2024-07-25 14:07:53.015192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.933 [2024-07-25 14:07:53.019186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.933 [2024-07-25 14:07:53.019248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 
14:07:53.019267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.023389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.023447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.023467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.027525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.027646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.027671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.031530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.031606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.031626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.035695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.035880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.035906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.040006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.040155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.040177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.044003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.044376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.044409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.047889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.047948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:43.934 [2024-07-25 14:07:53.047968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.052128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.052199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.052222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.056418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.056479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.056499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.060632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.060711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.060734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.064723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.064794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.064813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.068638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.068732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.068750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.072629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.072717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.072737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.076526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.076583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.076601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.079950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.080351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.080376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.083793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.083873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.083890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.087762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.087813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.087831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.091644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.091704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.091722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.095588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.095655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.095672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.099452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.099531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.099548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.103506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.103559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.103578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.107393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.107580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.107605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.110983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.111293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.111356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.114785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.114843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.114862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.118618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.118684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.118702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.122673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.934 [2024-07-25 14:07:53.122737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.934 [2024-07-25 14:07:53.122757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.934 [2024-07-25 14:07:53.126748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.126811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.126831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.130889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.130949] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.130970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.134910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.134975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.134997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.138842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.138919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.138939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.142770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.142846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.146756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.146879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.146899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.150828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.150919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.150937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.154898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.154960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.154980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.158596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.158986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.159016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.162620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.162717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.162740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.166806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.166873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.166896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.171091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.171185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.175608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.175679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.175701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.179946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.180013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.180034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.184254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.184338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.184360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.188497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 
14:07:53.188584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.188604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.192805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.192974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.192999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.197142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.197308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.197332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.201037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.201375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.201401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.205078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.205137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.205156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.209233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.209291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.209320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.213454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.213508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.213536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.217662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with 
pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.217724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.217742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.221859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.221929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.221947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.225968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.226044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.226063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.230097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.230176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.230197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.935 [2024-07-25 14:07:53.234250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:43.935 [2024-07-25 14:07:53.234475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.935 [2024-07-25 14:07:53.234501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.238028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.238388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.238419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.242002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.242060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.242079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.246171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.246230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.246251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.250270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.250341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.250367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.254306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.254370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.254389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.258283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.258353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.258372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.262286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.262388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.262410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.266313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.266410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.266429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.270292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.270366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.270389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.273948] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.274375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.274407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.278115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.278197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.278218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.282252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.282319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.282339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.286519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.286584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.286606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.290718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.290792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.290812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.294716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.294803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.294821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.298767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.298924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.298948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.302831] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.302980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.303004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.306393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.306701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.306744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.310197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.310254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.310274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.314062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.314122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.314140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.317944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.317995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.318013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.196 [2024-07-25 14:07:53.321819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.196 [2024-07-25 14:07:53.321877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.196 [2024-07-25 14:07:53.321894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.325895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.325952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.325971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 
[2024-07-25 14:07:53.329959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.330060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.330079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.334138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.334242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.338361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.338418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.338437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.342076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.342486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.342518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.346234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.346366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.350594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.350666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.350687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.355058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.355157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.355178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.359469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.359533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.359556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.363792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.363859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.363882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.368218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.368327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.368349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.372627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.372853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.372885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.376542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.376863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.376897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.380863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.380941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.380963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.385119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.385193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.385214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.389461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.389545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.389584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.393926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.394003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.398401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.398464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.398486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.402667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.402754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.402775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.406955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.407035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.407056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.411115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.411271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.414957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.415276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.415315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.418857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.418929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.418947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.422998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.423059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.423079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.427135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.427188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.427205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.431204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.197 [2024-07-25 14:07:53.431287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.197 [2024-07-25 14:07:53.431308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.197 [2024-07-25 14:07:53.435463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.435578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.435621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.198 [2024-07-25 14:07:53.439578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.439666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.439684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.198 [2024-07-25 14:07:53.443779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.443880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.198 [2024-07-25 14:07:53.447882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.447936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.447954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.198 [2024-07-25 14:07:53.451567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.451978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.452008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.198 [2024-07-25 14:07:53.455500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7e2220) with pdu=0x2000190fef90 00:19:44.198 [2024-07-25 14:07:53.455576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.198 [2024-07-25 14:07:53.455594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.198 00:19:44.198 Latency(us) 00:19:44.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.198 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:44.198 nvme0n1 : 2.00 7474.08 934.26 0.00 0.00 2136.77 1273.52 11905.23 00:19:44.198 =================================================================================================================== 00:19:44.198 Total : 7474.08 934.26 0.00 0.00 2136.77 1273.52 11905.23 00:19:44.198 0 00:19:44.198 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:44.198 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:44.198 | .driver_specific 00:19:44.198 | .nvme_error 00:19:44.198 | .status_code 00:19:44.198 | .command_transient_transport_error' 00:19:44.198 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:44.198 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 482 > 0 )) 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79765 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79765 ']' 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79765 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
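The xtrace just above is the point of the digest-error run: every data-digest error reported on the TCP qpair is followed by a WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and digest.sh then reads that error counter back from bdevperf. A condensed sketch of that check, reusing the rpc.py call and jq filter shown in the trace (bdevperf RPC socket /var/tmp/bperf.sock as above):

# Pull the transient-transport-error counter the NVMe bdev keeps under
# driver_specific.nvme_error.status_code and require that at least one was seen.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # 482 in this run, so the assertion passes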
00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79765 00:19:44.456 killing process with pid 79765 00:19:44.456 Received shutdown signal, test time was about 2.000000 seconds 00:19:44.456 00:19:44.456 Latency(us) 00:19:44.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.456 =================================================================================================================== 00:19:44.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79765' 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79765 00:19:44.456 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79765 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79563 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79563 ']' 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79563 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79563 00:19:44.715 killing process with pid 79563 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79563' 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79563 00:19:44.715 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79563 00:19:44.972 ************************************ 00:19:44.972 END TEST nvmf_digest_error 00:19:44.972 ************************************ 00:19:44.972 00:19:44.972 real 0m17.467s 00:19:44.972 user 0m33.546s 00:19:44.972 sys 0m4.411s 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:44.972 14:07:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.972 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.972 rmmod nvme_tcp 00:19:44.972 rmmod nvme_fabrics 00:19:44.972 rmmod nvme_keyring 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79563 ']' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79563 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79563 ']' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79563 00:19:45.231 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79563) - No such process 00:19:45.231 Process with pid 79563 is not found 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79563 is not found' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.231 00:19:45.231 real 0m35.897s 00:19:45.231 user 1m7.543s 00:19:45.231 sys 0m9.133s 00:19:45.231 ************************************ 00:19:45.231 END TEST nvmf_digest 00:19:45.231 ************************************ 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.231 ************************************ 00:19:45.231 START TEST 
nvmf_host_multipath 00:19:45.231 ************************************ 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:45.231 * Looking for test storage... 00:19:45.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.231 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
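The variable assignments above and the ip/iptables commands traced below are nvmf_veth_init rebuilding the test network: three veth pairs, one bridge, and a target namespace, so the initiator at 10.0.0.1 can reach two target addresses. A condensed sketch of that topology, using the names and addresses from the trace (the stale-interface teardown and per-link bring-up are elided):

# veth pairs: nvmf_init_if <-> nvmf_init_br, nvmf_tgt_if <-> nvmf_tgt_br,
#             nvmf_tgt_if2 <-> nvmf_tgt_br2
# The *_br ends stay on the host and join bridge nvmf_br; nvmf_tgt_if/if2 move
# into the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 and 10.0.0.3/24, while
# the initiator keeps nvmf_init_if at 10.0.0.1/24.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the pings to 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 below confirm the wiring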
00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:45.491 Cannot find device "nvmf_tgt_br" 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.491 Cannot find device "nvmf_tgt_br2" 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:45.491 Cannot find device "nvmf_tgt_br" 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:45.491 Cannot find device "nvmf_tgt_br2" 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.491 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:45.492 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:45.492 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:45.492 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:45.492 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:45.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:19:45.750 00:19:45.750 --- 10.0.0.2 ping statistics --- 00:19:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.750 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:45.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:19:45.750 00:19:45.750 --- 10.0.0.3 ping statistics --- 00:19:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.750 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:45.750 00:19:45.750 --- 10.0.0.1 ping statistics --- 00:19:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.750 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.750 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80028 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80028 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80028 ']' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.751 14:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:45.751 [2024-07-25 14:07:54.980193] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
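Once nvmf_tgt is up inside the namespace (startup banner above), multipath.sh wires up the target and the bdevperf initiator entirely over RPC, as the trace below shows: one TCP transport, a Malloc0 namespace on nqn.2016-06.io.spdk:cnode1, two listeners on ports 4420 and 4421, and two bdev_nvme_attach_controller calls so the Nvme0n1 bdev ends up with both paths. A condensed sketch of that sequence, arguments copied from the trace (bdevperf itself is started separately with -z -r /var/tmp/bdevperf.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side (default /var/tmp/spdk.sock): transport, backing bdev, subsystem, two listeners
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# initiator side: attach the same subsystem twice through bdevperf's RPC socket;
# the second attach adds -x multipath, giving Nvme0n1 its second path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

From there the test repeatedly toggles the ANA state of each listener with nvmf_subsystem_listener_set_ana_state and uses the bpftrace probes plus nvmf_subsystem_get_listeners piped through jq to confirm that I/O actually flows through whichever port is currently optimized (or through neither when both paths are inaccessible), as the confirm_io_on_port traces below show.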
00:19:45.751 [2024-07-25 14:07:54.980269] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.009 [2024-07-25 14:07:55.118007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:46.009 [2024-07-25 14:07:55.221764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.009 [2024-07-25 14:07:55.221806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.009 [2024-07-25 14:07:55.221813] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.009 [2024-07-25 14:07:55.221819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.009 [2024-07-25 14:07:55.221824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.009 [2024-07-25 14:07:55.221992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.009 [2024-07-25 14:07:55.221994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.009 [2024-07-25 14:07:55.263991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:46.577 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.577 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:46.577 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.577 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.577 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:46.837 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.837 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80028 00:19:46.837 14:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:46.837 [2024-07-25 14:07:56.075077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.837 14:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:47.096 Malloc0 00:19:47.096 14:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:47.355 14:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.614 14:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.873 [2024-07-25 14:07:56.929898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.873 14:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.873 [2024-07-25 14:07:57.109662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.873 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80077 00:19:47.873 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:47.873 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.873 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80077 /var/tmp/bdevperf.sock 00:19:47.873 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80077 ']' 00:19:47.874 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.874 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.874 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.874 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.874 14:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:48.810 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.810 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:48.810 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:49.070 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:49.329 Nvme0n1 00:19:49.329 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:49.588 Nvme0n1 00:19:49.847 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.847 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:50.785 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:50.785 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:51.043 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:51.302 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:51.302 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:51.302 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80128 00:19:51.302 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.866 Attaching 4 probes... 00:19:57.866 @path[10.0.0.2, 4421]: 19263 00:19:57.866 @path[10.0.0.2, 4421]: 20258 00:19:57.866 @path[10.0.0.2, 4421]: 20117 00:19:57.866 @path[10.0.0.2, 4421]: 21377 00:19:57.866 @path[10.0.0.2, 4421]: 21272 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80128 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:57.866 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:57.866 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:57.866 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80240 00:19:57.866 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:57.866 14:08:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.452 Attaching 4 probes... 00:20:04.452 @path[10.0.0.2, 4420]: 19799 00:20:04.452 @path[10.0.0.2, 4420]: 20132 00:20:04.452 @path[10.0.0.2, 4420]: 20010 00:20:04.452 @path[10.0.0.2, 4420]: 20208 00:20:04.452 @path[10.0.0.2, 4420]: 20415 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80240 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:04.452 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:04.711 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:04.711 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80358 00:20:04.711 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:04.711 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:11.278 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:11.278 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:11.278 Attaching 4 probes... 00:20:11.278 @path[10.0.0.2, 4421]: 15195 00:20:11.278 @path[10.0.0.2, 4421]: 19767 00:20:11.278 @path[10.0.0.2, 4421]: 19904 00:20:11.278 @path[10.0.0.2, 4421]: 19742 00:20:11.278 @path[10.0.0.2, 4421]: 20016 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80358 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80465 00:20:11.278 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:17.842 Attaching 4 probes... 
00:20:17.842 00:20:17.842 00:20:17.842 00:20:17.842 00:20:17.842 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80465 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:17.842 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:17.842 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:17.842 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80583 00:20:17.842 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:17.842 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:24.404 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:24.405 Attaching 4 probes... 
00:20:24.405 @path[10.0.0.2, 4421]: 19244 00:20:24.405 @path[10.0.0.2, 4421]: 19837 00:20:24.405 @path[10.0.0.2, 4421]: 19396 00:20:24.405 @path[10.0.0.2, 4421]: 19565 00:20:24.405 @path[10.0.0.2, 4421]: 19973 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80583 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:24.405 14:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:25.392 14:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:25.392 14:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80706 00:20:25.392 14:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:25.392 14:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.949 Attaching 4 probes... 
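The port extraction at multipath.sh@69 appears in the xtrace above as three separate pipeline members (awk, cut, sed) whose exact order is not fixed by the interleaved output; one plausible composition, sketched here, yields the values seen in the log (4421, 4420, or empty):

trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
# '@path[10.0.0.2, 4420]: 18921'  ->  '4420]:'  ->  '4420'  (first matching line only)
port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
[[ $port == "$expected_port" ]]   # expected_port stands in for the value passed to confirm_io_on_port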
00:20:31.949 @path[10.0.0.2, 4420]: 18921 00:20:31.949 @path[10.0.0.2, 4420]: 19229 00:20:31.949 @path[10.0.0.2, 4420]: 19131 00:20:31.949 @path[10.0.0.2, 4420]: 20648 00:20:31.949 @path[10.0.0.2, 4420]: 19927 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80706 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.949 14:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:31.949 [2024-07-25 14:08:41.048621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:31.949 14:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:32.207 14:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:38.772 14:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:38.772 14:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80875 00:20:38.773 14:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:38.773 14:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:44.044 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:44.044 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:44.302 Attaching 4 probes... 
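The path flip exercised at multipath.sh@100, @107 and @108 above reduces to three rpc.py calls; collected here as a sketch (the rpc shorthand variable is an assumption, the commands themselves are verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# drop the 4421 path so IO falls back to the non_optimized 4420 listener ...
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ... then bring it back as the optimized path so IO moves to 4421 again
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized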
00:20:44.302 @path[10.0.0.2, 4421]: 22184 00:20:44.302 @path[10.0.0.2, 4421]: 23319 00:20:44.302 @path[10.0.0.2, 4421]: 23197 00:20:44.302 @path[10.0.0.2, 4421]: 22858 00:20:44.302 @path[10.0.0.2, 4421]: 21839 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80875 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80077 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80077 ']' 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80077 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80077 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:44.302 killing process with pid 80077 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80077' 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80077 00:20:44.302 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80077 00:20:44.568 Connection closed with partial response: 00:20:44.568 00:20:44.568 00:20:44.568 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80077 00:20:44.568 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:44.568 [2024-07-25 14:07:57.179832] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
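The teardown traced at autotest_common.sh@950-@974 just before the try.txt dump boils down to a guarded kill-and-wait; a rough sketch of that sequence (the sudo-owner branch that shows up in the trace as the 'reactor_2 = sudo' test is omitted here):

killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1           # the '[ -z 80077 ]' guard
        kill -0 "$pid" || return 0          # nothing to do if bdevperf already exited
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # collect the exit status; here it surfaces the partial-response close
}
killprocess 80077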
00:20:44.568 [2024-07-25 14:07:57.179988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80077 ] 00:20:44.568 [2024-07-25 14:07:57.316019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.568 [2024-07-25 14:07:57.419599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.568 [2024-07-25 14:07:57.460518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:44.568 Running I/O for 90 seconds... 00:20:44.568 [2024-07-25 14:08:07.095293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.568 [2024-07-25 14:08:07.095375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:44.568 [2024-07-25 14:08:07.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.568 [2024-07-25 14:08:07.095452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.095617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.095989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.095999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.569 [2024-07-25 14:08:07.096330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-07-25 14:08:07.096466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:44.569 [2024-07-25 14:08:07.096491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.096726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.096735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.570 [2024-07-25 14:08:07.097859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.097887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.097943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.097975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.097993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.098003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.098020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-07-25 14:08:07.098031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:44.570 [2024-07-25 14:08:07.098048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 
14:08:07.098113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124952 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.571 [2024-07-25 14:08:07.098666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:44.571 [2024-07-25 14:08:07.098959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.571 [2024-07-25 14:08:07.098967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.572 [2024-07-25 14:08:07.099012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.572 [2024-07-25 14:08:07.099041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.572 [2024-07-25 14:08:07.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.572 [2024-07-25 14:08:07.099091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.572 [2024-07-25 14:08:07.099117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.572 [2024-07-25 14:08:07.099143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.572 [2024-07-25 14:08:07.099169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:44.572 [2024-07-25 14:08:07.099185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.572 [2024-07-25 14:08:07.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:20:44.572 [2024-07-25 14:08:07.099210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:44.572 [2024-07-25 14:08:07.099219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
[... further WRITE command/completion pairs at 14:08:07 for lba 125072 through 125152 (sqid:1, len:8), each completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:20:44.572 [2024-07-25 14:08:13.568245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:44.572 [2024-07-25 14:08:13.568314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
[... further WRITE (lba 120672 through 121240) and READ (lba 120224 through 120664) command/completion pairs at 14:08:13 on qid:1, each completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:20:44.576 [2024-07-25 14:08:20.432163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:44.576 [2024-07-25 14:08:20.432229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
[... further WRITE (lba 60584 through 60960) and READ (lba 60200 through 60336) command/completion pairs at 14:08:20 on qid:1, each completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:20:44.578 [2024-07-25 14:08:20.434028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.578 [2024-07-25 14:08:20.434398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.434849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.434884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.434916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.434947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.434979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.434989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.435010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.435019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.435041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.435050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.435071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.578 [2024-07-25 14:08:20.435081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:44.578 [2024-07-25 14:08:20.435102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.579 [2024-07-25 14:08:20.435857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.435973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.435982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:44.579 [2024-07-25 14:08:20.436284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.579 [2024-07-25 14:08:20.436293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:20.436323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:20.436332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:20.436354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:20.436363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:20:44.580 [2024-07-25 14:08:33.605414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.605631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.605976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.605989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.606014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.580 [2024-07-25 14:08:33.606039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.606147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.580 [2024-07-25 14:08:33.606158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.580 [2024-07-25 14:08:33.606168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.581 [2024-07-25 14:08:33.606772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:44.581 [2024-07-25 14:08:33.606846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.581 [2024-07-25 14:08:33.606984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.581 [2024-07-25 14:08:33.606994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607278] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22584 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.582 [2024-07-25 14:08:33.607617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:44.582 [2024-07-25 14:08:33.607701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.582 [2024-07-25 14:08:33.607759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d680 is same with the state(5) to be set 00:20:44.582 [2024-07-25 14:08:33.607781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.582 [2024-07-25 14:08:33.607788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.582 [2024-07-25 14:08:33.607795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22120 len:8 PRP1 0x0 PRP2 0x0 00:20:44.582 [2024-07-25 14:08:33.607805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.582 [2024-07-25 14:08:33.607814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.582 [2024-07-25 14:08:33.607820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.582 [2024-07-25 14:08:33.607827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22640 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.607842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.607852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.607858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.607864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22648 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.607873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.607883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.607889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.607896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:8 PRP1 0x0 PRP2 0x0 
00:20:44.583 [2024-07-25 14:08:33.607910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.607919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.607925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.607932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22664 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.607941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.607967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.607973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.607981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22672 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.607990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22680 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22696 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22704 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22712 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22728 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22736 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22744 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.608355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.583 [2024-07-25 14:08:33.608361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.583 [2024-07-25 14:08:33.608368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22760 len:8 PRP1 0x0 PRP2 0x0 00:20:44.583 [2024-07-25 14:08:33.608376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.626953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x232d680 was disconnected and freed. reset controller. 00:20:44.583 [2024-07-25 14:08:33.627163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.583 [2024-07-25 14:08:33.627197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.627219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.583 [2024-07-25 14:08:33.627237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.627256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.583 [2024-07-25 14:08:33.627273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.627345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.583 [2024-07-25 14:08:33.627364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.627383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.583 [2024-07-25 14:08:33.627400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.583 [2024-07-25 14:08:33.627426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af100 is same with the state(5) to be set 00:20:44.583 [2024-07-25 14:08:33.629321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:44.583 [2024-07-25 14:08:33.629372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22af100 (9): Bad file descriptor 00:20:44.583 [2024-07-25 14:08:33.629971] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:44.583 [2024-07-25 14:08:33.630009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22af100 with addr=10.0.0.2, port=4421 00:20:44.584 [2024-07-25 14:08:33.630029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af100 is same with the state(5) to be set 00:20:44.584 [2024-07-25 14:08:33.630121] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22af100 (9): Bad file descriptor 00:20:44.584 [2024-07-25 14:08:33.630159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:44.584 [2024-07-25 14:08:33.630177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:44.584 [2024-07-25 14:08:33.630195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:44.584 [2024-07-25 14:08:33.630234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:44.584 [2024-07-25 14:08:33.630251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:44.584 [2024-07-25 14:08:43.674896] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:44.584 Received shutdown signal, test time was about 54.641883 seconds 00:20:44.584 00:20:44.584 Latency(us) 00:20:44.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.584 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:44.584 Verification LBA range: start 0x0 length 0x4000 00:20:44.584 Nvme0n1 : 54.64 8651.43 33.79 0.00 0.00 14773.63 965.87 7033243.39 00:20:44.584 =================================================================================================================== 00:20:44.584 Total : 8651.43 33.79 0.00 0.00 14773.63 965.87 7033243.39 00:20:44.584 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:44.842 14:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:44.842 rmmod nvme_tcp 00:20:44.842 rmmod nvme_fabrics 00:20:44.842 rmmod nvme_keyring 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80028 ']' 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80028 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80028 ']' 00:20:44.842 14:08:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80028 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80028 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80028' 00:20:44.842 killing process with pid 80028 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80028 00:20:44.842 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80028 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:45.101 00:20:45.101 real 0m59.913s 00:20:45.101 user 2m48.942s 00:20:45.101 sys 0m15.506s 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:45.101 ************************************ 00:20:45.101 END TEST nvmf_host_multipath 00:20:45.101 ************************************ 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.101 ************************************ 00:20:45.101 START TEST nvmf_timeout 00:20:45.101 ************************************ 00:20:45.101 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:45.359 * Looking for test storage... 
00:20:45.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:45.359 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
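The preamble traced above pins down the identity and tooling the rest of the run relies on: rpc_py points at scripts/rpc.py, bdevperf_rpc_sock at /var/tmp/bdevperf.sock, and MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 feed the later bdev_malloc_create call. As a minimal sketch, the host NQN/ID pair seen here can be derived roughly like this (the ##*: parameter expansion is an illustrative assumption, not necessarily what nvmf/common.sh actually does):
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing uuid (assumed derivation)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")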
00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:45.360 Cannot find device "nvmf_tgt_br" 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.360 Cannot find device "nvmf_tgt_br2" 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
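nvmftestinit / nvmf_veth_init, traced next, first tear down any stale interfaces and then build a small veth-plus-bridge topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target ends of two veth pairs move into the nvmf_tgt_ns_spdk namespace as nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the host-side peers are enslaved to the nvmf_br bridge. Condensed from the commands in the trace below (link-up steps and the iptables ACCEPT rules omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br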
00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:45.360 Cannot find device "nvmf_tgt_br" 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:45.360 Cannot find device "nvmf_tgt_br2" 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:20:45.360 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:45.618 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:45.619 14:08:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:45.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:45.619 00:20:45.619 --- 10.0.0.2 ping statistics --- 00:20:45.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.619 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:45.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:45.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:45.619 00:20:45.619 --- 10.0.0.3 ping statistics --- 00:20:45.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.619 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:45.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:45.619 00:20:45.619 --- 10.0.0.1 ping statistics --- 00:20:45.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.619 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81186 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81186 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81186 ']' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:45.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.619 14:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:45.878 [2024-07-25 14:08:54.935131] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:20:45.878 [2024-07-25 14:08:54.935205] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.878 [2024-07-25 14:08:55.071871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:45.878 [2024-07-25 14:08:55.173629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
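With the pings confirming connectivity, nvmfappstart has launched nvmf_tgt inside the namespace on cores 0 and 1 (-m 0x3, matching "Total cores available: 2" above). The trace that follows provisions the target over its default RPC socket /var/tmp/spdk.sock, which stays reachable from the root namespace because network namespaces do not isolate the filesystem. Condensed as a sketch, the provisioning sequence is:
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                                  # transport flags exactly as captured in the trace
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                                     # 64 MB RAM-backed bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a allow any host, -s serial number
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420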
00:20:45.878 [2024-07-25 14:08:55.173871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.878 [2024-07-25 14:08:55.173883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.878 [2024-07-25 14:08:55.173889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.878 [2024-07-25 14:08:55.173894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.878 [2024-07-25 14:08:55.174004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.878 [2024-07-25 14:08:55.174089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.136 [2024-07-25 14:08:55.216581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.705 14:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:46.965 [2024-07-25 14:08:56.024368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.966 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:46.966 Malloc0 00:20:46.966 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.225 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.483 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.742 [2024-07-25 14:08:56.850941] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81235 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81235 /var/tmp/bdevperf.sock 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81235 ']' 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.742 14:08:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:47.742 [2024-07-25 14:08:56.896904] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:20:47.742 [2024-07-25 14:08:56.896965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81235 ] 00:20:47.742 [2024-07-25 14:08:57.034918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.001 [2024-07-25 14:08:57.133653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.001 [2024-07-25 14:08:57.174900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:48.568 14:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.568 14:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:48.568 14:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:48.827 14:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:49.086 NVMe0n1 00:20:49.086 14:08:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81253 00:20:49.086 14:08:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.086 14:08:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:49.086 Running I/O for 10 seconds... 
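On the host side, bdevperf is started with -z so it idles until told to run, the NVMe-oF controller is attached over TCP to the listener created above (exposing NVMe0n1), and perform_tests kicks off the 10-second verify workload. Immediately afterwards timeout.sh@55 removes the listener, which is what triggers the flood of ABORTED - SQ DELETION completions traced next. Condensed from the trace above (backgrounding and waitforlisten handling omitted):
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &   # core 2; -q depth, -o IO size, -w workload, -t seconds
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2                              # exposes NVMe0n1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests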
00:20:50.025 14:08:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.288 [2024-07-25 14:08:59.374993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.288 [2024-07-25 14:08:59.375143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:50.288 [2024-07-25 14:08:59.375167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.288 [2024-07-25 14:08:59.375237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.288 [2024-07-25 14:08:59.375242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 
14:08:59.375292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.289 [2024-07-25 14:08:59.375597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.289 [2024-07-25 14:08:59.375765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.289 [2024-07-25 14:08:59.375771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:50.290 [2024-07-25 14:08:59.375843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.375884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.375970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.375976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 
14:08:59.376022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.290 [2024-07-25 14:08:59.376326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.376339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.376353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376360] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.376366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.290 [2024-07-25 14:08:59.376380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.290 [2024-07-25 14:08:59.376387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.291 [2024-07-25 14:08:59.376564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.291 [2024-07-25 14:08:59.376794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fe1b0 is same with the state(5) to be set 00:20:50.291 [2024-07-25 14:08:59.376817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:101816 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102144 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102152 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102168 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102176 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 [2024-07-25 14:08:59.376950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.291 [2024-07-25 14:08:59.376957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.291 [2024-07-25 14:08:59.376961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.291 [2024-07-25 14:08:59.376970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102184 len:8 PRP1 0x0 PRP2 0x0 00:20:50.291 
[2024-07-25 14:08:59.376976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.292 [2024-07-25 14:08:59.376982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.292 [2024-07-25 14:08:59.376987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.292 [2024-07-25 14:08:59.376992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102192 len:8 PRP1 0x0 PRP2 0x0 00:20:50.292 [2024-07-25 14:08:59.376997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.292 [2024-07-25 14:08:59.377003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.292 [2024-07-25 14:08:59.377019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.292 [2024-07-25 14:08:59.377024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102200 len:8 PRP1 0x0 PRP2 0x0 00:20:50.292 [2024-07-25 14:08:59.377030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.292 [2024-07-25 14:08:59.377072] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17fe1b0 was disconnected and freed. reset controller. 00:20:50.292 [2024-07-25 14:08:59.377325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:50.292 [2024-07-25 14:08:59.377392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178dd40 (9): Bad file descriptor 00:20:50.292 [2024-07-25 14:08:59.377468] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.292 [2024-07-25 14:08:59.377479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178dd40 with addr=10.0.0.2, port=4420 00:20:50.292 [2024-07-25 14:08:59.377486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dd40 is same with the state(5) to be set 00:20:50.292 [2024-07-25 14:08:59.377497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178dd40 (9): Bad file descriptor 00:20:50.292 [2024-07-25 14:08:59.377509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:50.292 [2024-07-25 14:08:59.377524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:50.292 [2024-07-25 14:08:59.377532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:50.292 [2024-07-25 14:08:59.377547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:50.292 [2024-07-25 14:08:59.377554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:50.292 14:08:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:52.194 [2024-07-25 14:09:01.373905] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.194 [2024-07-25 14:09:01.374022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178dd40 with addr=10.0.0.2, port=4420 00:20:52.194 [2024-07-25 14:09:01.374066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dd40 is same with the state(5) to be set 00:20:52.194 [2024-07-25 14:09:01.374112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178dd40 (9): Bad file descriptor 00:20:52.194 [2024-07-25 14:09:01.374185] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.194 [2024-07-25 14:09:01.374237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:52.194 [2024-07-25 14:09:01.374317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:52.194 [2024-07-25 14:09:01.374372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.194 [2024-07-25 14:09:01.374409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:52.194 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:52.194 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:52.194 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:52.452 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:52.452 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:52.452 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:52.452 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:52.709 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:52.709 14:09:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:54.155 [2024-07-25 14:09:03.370733] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.155 [2024-07-25 14:09:03.370862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178dd40 with addr=10.0.0.2, port=4420 00:20:54.155 [2024-07-25 14:09:03.370901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dd40 is same with the state(5) to be set 00:20:54.155 [2024-07-25 14:09:03.370944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178dd40 (9): Bad file descriptor 00:20:54.155 [2024-07-25 14:09:03.370986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:54.155 [2024-07-25 14:09:03.371042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:54.155 [2024-07-25 
14:09:03.371096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.155 [2024-07-25 14:09:03.371144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:54.155 [2024-07-25 14:09:03.371177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.077 [2024-07-25 14:09:05.367418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:56.077 [2024-07-25 14:09:05.367565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.077 [2024-07-25 14:09:05.367601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:56.077 [2024-07-25 14:09:05.367628] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:56.077 [2024-07-25 14:09:05.367659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.451 00:20:57.451 Latency(us) 00:20:57.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.451 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.451 Verification LBA range: start 0x0 length 0x4000 00:20:57.451 NVMe0n1 : 8.09 1564.20 6.11 15.83 0.00 81088.55 2747.36 7033243.39 00:20:57.451 =================================================================================================================== 00:20:57.451 Total : 1564.20 6.11 15.83 0.00 81088.55 2747.36 7033243.39 00:20:57.451 0 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:57.709 14:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81253 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81235 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81235 ']' 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81235 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81235 00:20:57.967 killing process with pid 81235 00:20:57.967 Received shutdown signal, test time was about 8.917525 seconds 00:20:57.967 00:20:57.967 Latency(us) 00:20:57.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:57.967 =================================================================================================================== 00:20:57.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81235' 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81235 00:20:57.967 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81235 00:20:58.225 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.483 [2024-07-25 14:09:07.543445] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81370 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81370 /var/tmp/bdevperf.sock 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81370 ']' 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.483 14:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:58.483 [2024-07-25 14:09:07.613861] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:20:58.483 [2024-07-25 14:09:07.613993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81370 ] 00:20:58.483 [2024-07-25 14:09:07.739237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.740 [2024-07-25 14:09:07.835206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.740 [2024-07-25 14:09:07.875492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:59.304 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.304 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:59.304 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:59.562 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:59.821 NVMe0n1 00:20:59.821 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81388 00:20:59.821 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:59.821 14:09:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:59.821 Running I/O for 10 seconds... 
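The second pass re-attaches the same controller with an additional fast-io-fail window. A minimal sketch of that attach call, reusing RPC and SOCK from the sketch above; the flags are verbatim from the log, while the per-flag notes reflect the usual SPDK bdev_nvme semantics rather than anything stated in the log itself:

    # Same target as before, now with a fast-io-fail window:
    #   --reconnect-delay-sec 1       retry the connection every 1 s,
    #   --fast-io-fail-timeout-sec 2  after 2 s without a reconnect, fail queued I/O
    #                                 back to the caller instead of holding it,
    #   --ctrlr-loss-timeout-sec 5    and give the controller up entirely after 5 s.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

As before, removing the target's listener (the nvmf_subsystem_remove_listener call that follows) triggers the disconnect and the aborted-I/O notices below.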
00:21:00.757 14:09:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.016 [2024-07-25 14:09:10.076981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.016 [2024-07-25 14:09:10.077032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.016 [2024-07-25 14:09:10.077049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.016 [2024-07-25 14:09:10.077055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.016 [2024-07-25 14:09:10.077062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.016 [2024-07-25 14:09:10.077068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.016 [2024-07-25 14:09:10.077074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.016 [2024-07-25 14:09:10.077079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.016 [2024-07-25 14:09:10.077086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.016 [2024-07-25 14:09:10.077091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.016 [2024-07-25 14:09:10.077097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.017 [2024-07-25 14:09:10.077150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.017 [2024-07-25 14:09:10.077442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.017 [2024-07-25 14:09:10.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.017 [2024-07-25 14:09:10.077599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 
[2024-07-25 14:09:10.077694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.077988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.077995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.018 [2024-07-25 14:09:10.078130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99136 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.078165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.078181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.018 [2024-07-25 14:09:10.078195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.018 [2024-07-25 14:09:10.078203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 
[2024-07-25 14:09:10.078373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.019 [2024-07-25 14:09:10.078630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.019 [2024-07-25 14:09:10.078748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115aee0 is same with the state(5) to be set 00:21:01.019 [2024-07-25 14:09:10.078766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.019 [2024-07-25 14:09:10.078908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 
0x0 PRP2 0x0 00:21:01.019 [2024-07-25 14:09:10.078914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.019 [2024-07-25 14:09:10.078920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.019 [2024-07-25 14:09:10.078924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.078935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99368 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.078948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.078953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.078968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.078974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.078980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.078985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.078990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.079228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.020 [2024-07-25 14:09:10.079232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.020 [2024-07-25 14:09:10.079237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99448 len:8 PRP1 0x0 PRP2 0x0 00:21:01.020 [2024-07-25 14:09:10.079242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 14:09:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:01.020 [2024-07-25 14:09:10.094925] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x115aee0 was disconnected and freed. reset controller. 
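Editor's note: the long ABORTED - SQ DELETION dump above is the expected fallout of host/timeout.sh@87 removing the TCP listener while the verify workload is in flight: every queued READ/WRITE on qid 1 is aborted as the submission queue is deleted, qpair 0x115aee0 is disconnected and freed, and bdev_nvme schedules a controller reset. The listener removal and the later re-add are the fault-injection half of the test; as a target-side sketch, the pair of RPCs looks like this (commands as they appear in the log, with the intervening sleep from host/timeout.sh@90):

    # Sketch: fault injection used by the timeout test (target-side RPCs).
    # Dropping the listener aborts in-flight I/O and forces the host into its
    # reconnect path; re-adding it lets the pending controller reset succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420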
00:21:01.020 [2024-07-25 14:09:10.095035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.020 [2024-07-25 14:09:10.095047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.095055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.020 [2024-07-25 14:09:10.095060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.095066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.020 [2024-07-25 14:09:10.095072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.095078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.020 [2024-07-25 14:09:10.095083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.020 [2024-07-25 14:09:10.095089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:01.020 [2024-07-25 14:09:10.095261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.020 [2024-07-25 14:09:10.095274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:01.020 [2024-07-25 14:09:10.095352] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.020 [2024-07-25 14:09:10.095364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:01.020 [2024-07-25 14:09:10.095370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:01.020 [2024-07-25 14:09:10.095380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:01.020 [2024-07-25 14:09:10.095390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.020 [2024-07-25 14:09:10.095395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.020 [2024-07-25 14:09:10.095402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.020 [2024-07-25 14:09:10.095415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
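Editor's note: while the listener is down, every reconnect attempt fails with connect() errno 111 (connection refused), controller reinitialization fails, and bdev_nvme retries after the configured 1 s reconnect delay. To watch that state from outside the harness one could poll the controller list over the bdevperf RPC socket; the loop below is purely illustrative and is not part of host/timeout.sh:

    # Illustrative only (not from the test script): observe reconnect state
    # while the path is down. bdev_nvme_get_controllers is a standard SPDK RPC.
    while true; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers
        sleep 1
    done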
00:21:01.020 [2024-07-25 14:09:10.095422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.955 [2024-07-25 14:09:11.093618] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.955 [2024-07-25 14:09:11.093680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:01.955 [2024-07-25 14:09:11.093690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:01.955 [2024-07-25 14:09:11.093722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:01.955 [2024-07-25 14:09:11.093734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.955 [2024-07-25 14:09:11.093739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.955 [2024-07-25 14:09:11.093747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.955 [2024-07-25 14:09:11.093766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.955 [2024-07-25 14:09:11.093773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.955 14:09:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.213 [2024-07-25 14:09:11.269910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.213 14:09:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81388 00:21:03.150 [2024-07-25 14:09:12.112071] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:09.721 00:21:09.721 Latency(us) 00:21:09.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.721 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.721 Verification LBA range: start 0x0 length 0x4000 00:21:09.721 NVMe0n1 : 10.01 7970.19 31.13 0.00 0.00 16031.32 1566.85 3033086.21 00:21:09.721 =================================================================================================================== 00:21:09.721 Total : 7970.19 31.13 0.00 0.00 16031.32 1566.85 3033086.21 00:21:09.721 0 00:21:09.721 14:09:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81498 00:21:09.721 14:09:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.721 14:09:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:21:09.984 Running I/O for 10 seconds... 
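Editor's note: once the listener is re-added at host/timeout.sh@91, the pending reset completes ("Resetting controller successful") and the first 10-second pass finishes at roughly 7970 IOPS; the ~3.03 s maximum latency is consistent with I/O held across the window in which the path was down. The test then launches a second perform_tests pass (rpc_pid 81498) for the next scenario; a sketch of that driver step, mirroring host/timeout.sh@96-98:

    # Sketch: start another asynchronous verify pass and remember its PID,
    # as host/timeout.sh does before injecting the next fault.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1   # let the run start issuing I/O before the listener is removed again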
00:21:10.919 14:09:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.919 [2024-07-25 14:09:20.177571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 
[2024-07-25 14:09:20.177764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.177798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.177804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.920 [2024-07-25 14:09:20.178534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.920 [2024-07-25 14:09:20.178567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.920 [2024-07-25 14:09:20.178575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 
[2024-07-25 14:09:20.178735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.178832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.178954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.178996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.179003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.179018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.179032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.179045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.921 [2024-07-25 14:09:20.179058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.921 [2024-07-25 14:09:20.179157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.921 [2024-07-25 14:09:20.179164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82816 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.922 [2024-07-25 14:09:20.179378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.922 [2024-07-25 14:09:20.179586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.922 [2024-07-25 14:09:20.179776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.922 [2024-07-25 14:09:20.179784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.923 [2024-07-25 14:09:20.179790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.923 [2024-07-25 14:09:20.179803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.923 [2024-07-25 14:09:20.179817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11602e0 is same with the state(5) to be set 00:21:10.923 [2024-07-25 14:09:20.179848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.179853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.179859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.179865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.179877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.179882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.179894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.179906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.179911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83528 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.179929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.179952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.179958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83536 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.179968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.179975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.179979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.179984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.179990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.180000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.180005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.180009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83552 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.180015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.180021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.180026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.180031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83560 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.180037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.180047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.180052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.180057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.180062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.180068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.923 [2024-07-25 14:09:20.180073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.923 [2024-07-25 14:09:20.180078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:21:10.923 [2024-07-25 14:09:20.180084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.923 [2024-07-25 14:09:20.180138] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11602e0 was disconnected and freed. reset controller. 00:21:10.923 [2024-07-25 14:09:20.180392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.923 [2024-07-25 14:09:20.180465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:10.923 [2024-07-25 14:09:20.180542] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.923 [2024-07-25 14:09:20.180555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:10.923 [2024-07-25 14:09:20.180562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:10.923 [2024-07-25 14:09:20.180573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:10.923 [2024-07-25 14:09:20.180587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:10.923 [2024-07-25 14:09:20.180593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:10.923 [2024-07-25 14:09:20.180600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:10.923 [2024-07-25 14:09:20.180616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
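The flood of ABORTED - SQ DELETION notices above is the host-side fallout of dropping the only TCP listener at host/timeout.sh@99: every queued WRITE/READ on I/O qpair 1 is failed back, the qpair (0x11602e0) is disconnected and freed, and bdev_nvme starts resetting the controller. Each reconnect to 10.0.0.2:4420 then fails in uring_sock_create with errno = 111 (ECONNREFUSED), since nothing is listening on that port anymore. A minimal sketch of the trigger, with the rpc.py path, NQN and address copied from the log above:
# Drop the only TCP listener for cnode1; in-flight I/O on its qpairs is
# aborted with "SQ DELETION" and the initiator falls into a reconnect loop.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420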
00:21:10.923 [2024-07-25 14:09:20.180623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.923 14:09:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:12.298 [2024-07-25 14:09:21.178832] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.298 [2024-07-25 14:09:21.178961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:12.298 [2024-07-25 14:09:21.179001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:12.298 [2024-07-25 14:09:21.179042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:12.298 [2024-07-25 14:09:21.179078] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:12.298 [2024-07-25 14:09:21.179130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:12.298 [2024-07-25 14:09:21.179174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:12.298 [2024-07-25 14:09:21.179225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.298 [2024-07-25 14:09:21.179264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:13.231 [2024-07-25 14:09:22.177508] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.231 [2024-07-25 14:09:22.177677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:13.231 [2024-07-25 14:09:22.177720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:13.231 [2024-07-25 14:09:22.177765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:13.231 [2024-07-25 14:09:22.177829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:13.231 [2024-07-25 14:09:22.177878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:13.231 [2024-07-25 14:09:22.177927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:13.231 [2024-07-25 14:09:22.178000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
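While the listener is gone, bdev_nvme keeps retrying the connection roughly once per second (the 14:09:21 and 14:09:22 attempts above fail the same way), and the test script simply parks itself for the outage window at host/timeout.sh@101. A minimal equivalent of that wait follows; the optional grep is purely illustrative and the bdevperf.log filename is an assumption, not something the test itself uses:
# Hold the outage open long enough for several reconnect attempts to fail.
sleep 3
# Optional sanity check while waiting (illustrative only; assumes bdevperf
# output was redirected to bdevperf.log):
grep -c 'connect() failed, errno = 111' bdevperf.log || true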
00:21:13.231 [2024-07-25 14:09:22.178030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:14.165 [2024-07-25 14:09:23.178375] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.165 [2024-07-25 14:09:23.178482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ead40 with addr=10.0.0.2, port=4420 00:21:14.165 [2024-07-25 14:09:23.178528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ead40 is same with the state(5) to be set 00:21:14.165 [2024-07-25 14:09:23.178850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ead40 (9): Bad file descriptor 00:21:14.165 [2024-07-25 14:09:23.179109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:14.165 [2024-07-25 14:09:23.179154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:14.165 [2024-07-25 14:09:23.179210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:14.165 [2024-07-25 14:09:23.182324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:14.165 [2024-07-25 14:09:23.182392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:14.165 14:09:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.165 [2024-07-25 14:09:23.396148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.165 14:09:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81498 00:21:15.099 [2024-07-25 14:09:24.211763] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
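Re-adding the listener at host/timeout.sh@102 is what turns the pending reset into the "Resetting controller successful" notice at 14:09:24: the target starts listening on 10.0.0.2 port 4420 again, the next scheduled reconnect succeeds, and queued I/O resumes. The restore step, exactly as issued in the log:
# Restore the TCP listener; the outstanding controller reset completes on
# the next reconnect attempt.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420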
00:21:20.367 00:21:20.367 Latency(us) 00:21:20.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.367 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.367 Verification LBA range: start 0x0 length 0x4000 00:21:20.367 NVMe0n1 : 10.01 6726.28 26.27 4983.48 0.00 10911.99 522.28 3018433.62 00:21:20.367 =================================================================================================================== 00:21:20.367 Total : 6726.28 26.27 4983.48 0.00 10911.99 0.00 3018433.62 00:21:20.367 0 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81370 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81370 ']' 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81370 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81370 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81370' 00:21:20.367 killing process with pid 81370 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81370 00:21:20.367 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.367 00:21:20.367 Latency(us) 00:21:20.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.367 =================================================================================================================== 00:21:20.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81370 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81612 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81612 /var/tmp/bdevperf.sock 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81612 ']' 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
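The summary above closes out the first timeout case: the verify job still averaged about 6726 IOPS over its 10.01 s run, but 4983 I/O per second were reported as failed (the writes and reads aborted during the outage), average latency was roughly 10.9 ms, and the maximum of 3018433 us (about 3 s) corresponds to requests held across the 3-second listener outage. The bdevperf process from that case (pid 81370) is then killed, and a fresh bdevperf instance (pid 81612) is launched for the ctrlr-loss-timeout case at host/timeout.sh@109-110. The relaunch, with flags copied verbatim from the log:
# Second bdevperf instance for the ctrlr-loss-timeout case; -z makes it idle
# until a perform_tests RPC arrives on /var/tmp/bdevperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &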
00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.367 14:09:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:20.367 [2024-07-25 14:09:29.381959] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:21:20.367 [2024-07-25 14:09:29.382053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81612 ] 00:21:20.367 [2024-07-25 14:09:29.525277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.367 [2024-07-25 14:09:29.625466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.367 [2024-07-25 14:09:29.667731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81628 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81612 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:21.318 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:21.576 NVMe0n1 00:21:21.576 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81670 00:21:21.576 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.576 14:09:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:21.576 Running I/O for 10 seconds... 
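This block sets up the controller-loss case: a bpftrace probe (scripts/bpf/nvmf_timeout.bt, attached via bpftrace.sh) is hooked to the new bdevperf, its bdev_nvme layer is tuned with bdev_nvme_set_options -r -1 -e 9, and NVMe0 is attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, which per the option names means reconnect attempts every 2 s and the controller treated as lost after 5 s without a working connection. perform_tests is then driven over the bdevperf RPC socket, giving the "Running I/O for 10 seconds..." line above. The same sequence, condensed from the log; all paths and flags are copied verbatim, only the $rpc shorthand is added here:
# Setup for the ctrlr-loss-timeout case (host/timeout.sh@115-125).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc bdev_nvme_set_options -r -1 -e 9
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the 10 s randread run reported above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &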
00:21:22.510 14:09:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:22.772 [2024-07-25 14:09:31.931756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6790 is same with the state(5) to be set
[... the preceding tcp.c:1653 *ERROR* record repeats verbatim for tqpair=0x6f6790, timestamps 14:09:31.931807 through 14:09:31.932421 ...]
00:21:22.773 [2024-07-25 14:09:31.932477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.773 [2024-07-25 14:09:31.932505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ / "ABORTED - SQ DELETION" record pairs repeat for cid:123 down to cid:0, then cid:125 and cid:126 (lba values vary per command), timestamps 14:09:31.932522 through 14:09:31.934631 ...]
00:21:22.777 [2024-07-25 14:09:31.934638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18416a0 is same with the state(5) to be set
00:21:22.777 [2024-07-25 14:09:31.934646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:22.777 [2024-07-25 14:09:31.934651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:22.777 [2024-07-25 14:09:31.934657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59024 len:8 PRP1 0x0 PRP2 0x0
00:21:22.777 [2024-07-25 14:09:31.934663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:22.777 [2024-07-25 14:09:31.934714] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18416a0 was disconnected and freed. reset controller.
00:21:22.777 [2024-07-25 14:09:31.934983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.777 [2024-07-25 14:09:31.935050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0c00 (9): Bad file descriptor
00:21:22.777 [2024-07-25 14:09:31.935129] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.777 [2024-07-25 14:09:31.935141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0c00 with addr=10.0.0.2, port=4420
00:21:22.777 [2024-07-25 14:09:31.935148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0c00 is same with the state(5) to be set
00:21:22.777 [2024-07-25 14:09:31.935160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0c00 (9): Bad file descriptor
00:21:22.777 [2024-07-25 14:09:31.935170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.777 [2024-07-25 14:09:31.935176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.777 [2024-07-25 14:09:31.935183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.777 [2024-07-25 14:09:31.935200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.777 [2024-07-25 14:09:31.935208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.777 14:09:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81670 00:21:24.674 [2024-07-25 14:09:33.931586] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.674 [2024-07-25 14:09:33.931643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0c00 with addr=10.0.0.2, port=4420 00:21:24.674 [2024-07-25 14:09:33.931653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0c00 is same with the state(5) to be set 00:21:24.674 [2024-07-25 14:09:33.931672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0c00 (9): Bad file descriptor 00:21:24.674 [2024-07-25 14:09:33.931691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.674 [2024-07-25 14:09:33.931696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.674 [2024-07-25 14:09:33.931703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.674 [2024-07-25 14:09:33.931726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.674 [2024-07-25 14:09:33.931733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:27.201 [2024-07-25 14:09:35.928120] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:27.201 [2024-07-25 14:09:35.928166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0c00 with addr=10.0.0.2, port=4420 00:21:27.201 [2024-07-25 14:09:35.928176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0c00 is same with the state(5) to be set 00:21:27.201 [2024-07-25 14:09:35.928193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0c00 (9): Bad file descriptor 00:21:27.201 [2024-07-25 14:09:35.928204] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:27.201 [2024-07-25 14:09:35.928210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:27.201 [2024-07-25 14:09:35.928217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:27.201 [2024-07-25 14:09:35.928240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:27.201 [2024-07-25 14:09:35.928247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.105 [2024-07-25 14:09:37.924580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
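The reconnect attempts above land roughly two seconds apart (14:09:31.93, 14:09:33.93, 14:09:35.93), which is the spacing the timeout test later checks when it greps the trace it dumps further below. A small hypothetical helper for eyeballing that spacing from the trace file; the field layout is assumed from the "Attaching 5 probes..." dump below, and the timestamps appear to be milliseconds:

  awk '/reconnect delay bdev controller NVMe0/ {
         t = $1 + 0                        # "3117.818232:" -> 3117.818232
         if (prev) printf "delta: %.1f\n", t - prev
         prev = t
       }' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # expected for this run (assumption): two deltas of roughly 1996, i.e. ~2 s between reconnects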
00:21:29.105 [2024-07-25 14:09:37.924623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:29.105 [2024-07-25 14:09:37.924630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:29.105 [2024-07-25 14:09:37.924637] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:29.105 [2024-07-25 14:09:37.924658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:29.672 00:21:29.672 Latency(us) 00:21:29.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.672 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:29.672 NVMe0n1 : 8.10 2429.91 9.49 15.80 0.00 52380.63 6467.74 7033243.39 00:21:29.672 =================================================================================================================== 00:21:29.672 Total : 2429.91 9.49 15.80 0.00 52380.63 6467.74 7033243.39 00:21:29.672 0 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.672 Attaching 5 probes... 00:21:29.672 1121.316131: reset bdev controller NVMe0 00:21:29.672 1121.418411: reconnect bdev controller NVMe0 00:21:29.672 3117.818232: reconnect delay bdev controller NVMe0 00:21:29.672 3117.836294: reconnect bdev controller NVMe0 00:21:29.672 5114.366116: reconnect delay bdev controller NVMe0 00:21:29.672 5114.384690: reconnect bdev controller NVMe0 00:21:29.672 7110.900922: reconnect delay bdev controller NVMe0 00:21:29.672 7110.918525: reconnect bdev controller NVMe0 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81628 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81612 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81612 ']' 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81612 00:21:29.672 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:21:29.931 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.931 14:09:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81612 00:21:29.931 killing process with pid 81612 00:21:29.931 Received shutdown signal, test time was about 8.182271 seconds 00:21:29.931 00:21:29.931 Latency(us) 00:21:29.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.931 =================================================================================================================== 00:21:29.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.931 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:29.931 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:29.931 14:09:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81612' 00:21:29.931 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81612 00:21:29.931 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81612 00:21:29.931 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.188 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:30.188 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:30.188 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.188 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:30.188 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.189 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:30.189 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.189 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.189 rmmod nvme_tcp 00:21:30.189 rmmod nvme_fabrics 00:21:30.189 rmmod nvme_keyring 00:21:30.189 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81186 ']' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81186 ']' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:30.446 killing process with pid 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81186' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81186 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.446 14:09:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.446 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:30.705 00:21:30.705 real 0m45.415s 00:21:30.705 user 2m13.143s 00:21:30.705 sys 0m4.823s 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 ************************************ 00:21:30.705 END TEST nvmf_timeout 00:21:30.705 ************************************ 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:30.705 00:21:30.705 real 4m57.216s 00:21:30.705 user 12m55.032s 00:21:30.705 sys 1m3.270s 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.705 14:09:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 ************************************ 00:21:30.705 END TEST nvmf_host 00:21:30.705 ************************************ 00:21:30.705 ************************************ 00:21:30.705 END TEST nvmf_tcp 00:21:30.705 ************************************ 00:21:30.705 00:21:30.705 real 11m39.993s 00:21:30.705 user 28m21.051s 00:21:30.705 sys 2m45.975s 00:21:30.705 14:09:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.705 14:09:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 14:09:39 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:21:30.705 14:09:39 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:30.705 14:09:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:30.705 14:09:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.705 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 ************************************ 00:21:30.705 START TEST nvmf_dif 00:21:30.705 ************************************ 00:21:30.705 14:09:39 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:30.964 * Looking for test storage... 
00:21:30.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:30.964 14:09:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.964 14:09:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:30.965 14:09:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.965 14:09:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.965 14:09:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.965 14:09:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.965 14:09:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.965 14:09:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.965 14:09:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:30.965 14:09:40 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.965 14:09:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:30.965 14:09:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:30.965 14:09:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:30.965 14:09:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:30.965 14:09:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.965 14:09:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:30.965 14:09:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:30.965 14:09:40 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:30.965 Cannot find device "nvmf_tgt_br" 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@155 -- # true 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.965 Cannot find device "nvmf_tgt_br2" 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@156 -- # true 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:30.965 Cannot find device "nvmf_tgt_br" 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@158 -- # true 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:30.965 Cannot find device "nvmf_tgt_br2" 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@159 -- # true 00:21:30.965 14:09:40 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.225 
14:09:40 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:31.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:21:31.225 00:21:31.225 --- 10.0.0.2 ping statistics --- 00:21:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.225 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:31.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:31.225 00:21:31.225 --- 10.0.0.3 ping statistics --- 00:21:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.225 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:21:31.225 00:21:31.225 --- 10.0.0.1 ping statistics --- 00:21:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.225 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:31.225 14:09:40 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:31.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.794 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:31.794 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.794 14:09:40 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:31.794 14:09:40 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:31.794 14:09:40 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.794 14:09:40 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.794 14:09:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:31.794 14:09:41 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82108 00:21:31.794 
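For reference, the topology that nvmf_veth_init just verified with the pings above can be rebuilt by hand. This is only the log's own ip/iptables commands condensed into one sketch (host-side nvmf_init_if at 10.0.0.1, target-side nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # host -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> host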
14:09:41 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:31.794 14:09:41 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82108 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82108 ']' 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.794 14:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:31.794 [2024-07-25 14:09:41.069262] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:21:31.794 [2024-07-25 14:09:41.069374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.053 [2024-07-25 14:09:41.211914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.053 [2024-07-25 14:09:41.311232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.053 [2024-07-25 14:09:41.311287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.053 [2024-07-25 14:09:41.311294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.053 [2024-07-25 14:09:41.311311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.053 [2024-07-25 14:09:41.311316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
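With nvmf_tgt starting inside the namespace, the rest of the DIF setup is plain JSON-RPC. Below is a condensed sketch of the rpc_cmd calls that follow in this log, issued by hand with scripts/rpc.py; polling spdk_get_version as a readiness probe is an assumed stand-in for the suite's waitforlisten helper:

  cd /home/vagrant/spdk_repo/spdk
  # wait until the target's RPC socket answers
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
  # TCP transport with DIF insert/strip enabled, then a DIF-type-1 null bdev
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it over NVMe/TCP on the namespace-side address
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420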
00:21:32.053 [2024-07-25 14:09:41.311337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.053 [2024-07-25 14:09:41.352652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:32.623 14:09:41 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.623 14:09:41 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:21:32.623 14:09:41 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.623 14:09:41 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.623 14:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 14:09:41 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.882 14:09:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:32.882 14:09:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 [2024-07-25 14:09:41.955148] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.882 14:09:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:32.882 14:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 ************************************ 00:21:32.882 START TEST fio_dif_1_default 00:21:32.882 ************************************ 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 bdev_null0 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:32.882 14:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.882 14:09:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:32.882 [2024-07-25 14:09:42.019126] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:32.882 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.882 { 00:21:32.882 "params": { 00:21:32.882 "name": "Nvme$subsystem", 00:21:32.882 "trtype": "$TEST_TRANSPORT", 00:21:32.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.883 "adrfam": "ipv4", 00:21:32.883 "trsvcid": "$NVMF_PORT", 00:21:32.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.883 "hdgst": ${hdgst:-false}, 00:21:32.883 "ddgst": ${ddgst:-false} 00:21:32.883 }, 00:21:32.883 "method": "bdev_nvme_attach_controller" 00:21:32.883 } 00:21:32.883 EOF 00:21:32.883 )") 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:32.883 "params": { 00:21:32.883 "name": "Nvme0", 00:21:32.883 "trtype": "tcp", 00:21:32.883 "traddr": "10.0.0.2", 00:21:32.883 "adrfam": "ipv4", 00:21:32.883 "trsvcid": "4420", 00:21:32.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:32.883 "hdgst": false, 00:21:32.883 "ddgst": false 00:21:32.883 }, 00:21:32.883 "method": "bdev_nvme_attach_controller" 00:21:32.883 }' 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:32.883 14:09:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:33.141 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:33.141 fio-3.35 00:21:33.141 Starting 1 thread 00:21:45.339 00:21:45.339 filename0: (groupid=0, jobs=1): err= 0: pid=82169: Thu Jul 25 14:09:52 2024 00:21:45.339 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(432MiB/10001msec) 00:21:45.339 slat (nsec): min=5489, max=61676, avg=6826.41, stdev=1491.63 00:21:45.339 clat (usec): min=270, max=2394, avg=342.61, stdev=36.16 00:21:45.339 lat (usec): min=275, max=2434, avg=349.44, stdev=36.80 00:21:45.339 clat percentiles (usec): 00:21:45.339 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:21:45.339 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 355], 00:21:45.339 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 396], 00:21:45.339 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 506], 99.95th=[ 529], 00:21:45.339 | 99.99th=[ 635] 00:21:45.339 bw ( KiB/s): min=40064, max=49024, per=100.00%, avg=44394.63, stdev=2535.32, samples=19 00:21:45.339 iops : min=10016, max=12256, avg=11098.63, stdev=633.82, samples=19 00:21:45.339 lat (usec) : 500=99.88%, 750=0.12% 00:21:45.339 lat 
(msec) : 2=0.01%, 4=0.01% 00:21:45.339 cpu : usr=87.13%, sys=11.45%, ctx=129, majf=0, minf=0 00:21:45.339 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:45.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.339 issued rwts: total=110672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.339 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:45.339 00:21:45.339 Run status group 0 (all jobs): 00:21:45.339 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=432MiB (453MB), run=10001-10001msec 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 00:21:45.339 real 0m10.980s 00:21:45.339 user 0m9.344s 00:21:45.339 sys 0m1.427s 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.339 14:09:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 ************************************ 00:21:45.339 END TEST fio_dif_1_default 00:21:45.339 ************************************ 00:21:45.339 14:09:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:45.339 14:09:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:45.339 14:09:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.339 14:09:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 ************************************ 00:21:45.339 START TEST fio_dif_1_multi_subsystems 00:21:45.339 ************************************ 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:45.339 14:09:53 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 bdev_null0 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 [2024-07-25 14:09:53.054730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 bdev_null1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.339 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.340 { 00:21:45.340 "params": { 00:21:45.340 "name": "Nvme$subsystem", 00:21:45.340 "trtype": "$TEST_TRANSPORT", 00:21:45.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.340 "adrfam": "ipv4", 00:21:45.340 "trsvcid": "$NVMF_PORT", 00:21:45.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.340 "hdgst": ${hdgst:-false}, 00:21:45.340 "ddgst": ${ddgst:-false} 00:21:45.340 }, 00:21:45.340 "method": "bdev_nvme_attach_controller" 00:21:45.340 } 00:21:45.340 EOF 00:21:45.340 )") 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.340 { 00:21:45.340 "params": { 00:21:45.340 "name": "Nvme$subsystem", 00:21:45.340 "trtype": "$TEST_TRANSPORT", 00:21:45.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.340 "adrfam": "ipv4", 00:21:45.340 "trsvcid": "$NVMF_PORT", 00:21:45.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.340 "hdgst": ${hdgst:-false}, 00:21:45.340 "ddgst": ${ddgst:-false} 00:21:45.340 }, 00:21:45.340 "method": "bdev_nvme_attach_controller" 00:21:45.340 } 00:21:45.340 EOF 00:21:45.340 )") 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
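The JSON being assembled here (printed just below) is the same kind of config a standalone run would hand to the fio bdev plugin. A rough sketch of doing that outside the test harness; the subsystems/bdev wrapper, the Nvme0n1/Nvme1n1 bdev names, thread=1, and the job options are assumptions pieced together from the attach_controller names and the job lines further down, not something this log spells out:

  cat > /tmp/bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1" } } ] } ] }
  EOF
  cat > /tmp/dif.fio <<'EOF'
  [global]
  # the SPDK fio plugin is generally run with thread=1
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=4k
  iodepth=4
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio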
00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:45.340 "params": { 00:21:45.340 "name": "Nvme0", 00:21:45.340 "trtype": "tcp", 00:21:45.340 "traddr": "10.0.0.2", 00:21:45.340 "adrfam": "ipv4", 00:21:45.340 "trsvcid": "4420", 00:21:45.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:45.340 "hdgst": false, 00:21:45.340 "ddgst": false 00:21:45.340 }, 00:21:45.340 "method": "bdev_nvme_attach_controller" 00:21:45.340 },{ 00:21:45.340 "params": { 00:21:45.340 "name": "Nvme1", 00:21:45.340 "trtype": "tcp", 00:21:45.340 "traddr": "10.0.0.2", 00:21:45.340 "adrfam": "ipv4", 00:21:45.340 "trsvcid": "4420", 00:21:45.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.340 "hdgst": false, 00:21:45.340 "ddgst": false 00:21:45.340 }, 00:21:45.340 "method": "bdev_nvme_attach_controller" 00:21:45.340 }' 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:45.340 14:09:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:45.340 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:45.340 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:45.340 fio-3.35 00:21:45.340 Starting 2 threads 00:21:55.325 00:21:55.325 filename0: (groupid=0, jobs=1): err= 0: pid=82333: Thu Jul 25 14:10:03 2024 00:21:55.325 read: IOPS=5729, BW=22.4MiB/s (23.5MB/s)(224MiB/10001msec) 00:21:55.325 slat (nsec): min=5697, max=57003, avg=12606.57, stdev=3179.45 00:21:55.325 clat (usec): min=494, max=2876, avg=665.45, stdev=65.97 00:21:55.325 lat (usec): min=500, max=2908, avg=678.06, stdev=67.09 00:21:55.325 clat percentiles (usec): 00:21:55.325 | 1.00th=[ 545], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 603], 00:21:55.325 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 685], 00:21:55.325 | 70.00th=[ 701], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 758], 00:21:55.325 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 938], 99.95th=[ 971], 00:21:55.325 | 99.99th=[ 1090] 00:21:55.325 bw ( KiB/s): min=20512, max=25472, per=50.16%, avg=22991.16, stdev=1513.85, samples=19 00:21:55.325 iops : min= 5128, max= 
6368, avg=5747.79, stdev=378.46, samples=19 00:21:55.325 lat (usec) : 500=0.01%, 750=92.98%, 1000=6.99% 00:21:55.325 lat (msec) : 2=0.02%, 4=0.01% 00:21:55.325 cpu : usr=92.96%, sys=6.12%, ctx=14, majf=0, minf=9 00:21:55.325 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:55.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.325 issued rwts: total=57304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.325 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:55.325 filename1: (groupid=0, jobs=1): err= 0: pid=82334: Thu Jul 25 14:10:03 2024 00:21:55.325 read: IOPS=5729, BW=22.4MiB/s (23.5MB/s)(224MiB/10001msec) 00:21:55.325 slat (usec): min=5, max=116, avg=12.87, stdev= 3.46 00:21:55.325 clat (usec): min=504, max=2646, avg=663.99, stdev=64.51 00:21:55.325 lat (usec): min=510, max=2668, avg=676.86, stdev=65.56 00:21:55.325 clat percentiles (usec): 00:21:55.325 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 603], 00:21:55.325 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 685], 00:21:55.325 | 70.00th=[ 701], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 750], 00:21:55.325 | 99.00th=[ 832], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 979], 00:21:55.325 | 99.99th=[ 1090] 00:21:55.325 bw ( KiB/s): min=20544, max=25472, per=50.15%, avg=22990.74, stdev=1512.27, samples=19 00:21:55.325 iops : min= 5136, max= 6368, avg=5747.68, stdev=378.06, samples=19 00:21:55.325 lat (usec) : 750=94.22%, 1000=5.75% 00:21:55.325 lat (msec) : 2=0.03%, 4=0.01% 00:21:55.325 cpu : usr=90.16%, sys=8.67%, ctx=131, majf=0, minf=0 00:21:55.325 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:55.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.325 issued rwts: total=57304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.325 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:55.325 00:21:55.325 Run status group 0 (all jobs): 00:21:55.325 READ: bw=44.8MiB/s (46.9MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=448MiB (469MB), run=10001-10001msec 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 00:21:55.325 real 0m11.096s 00:21:55.325 user 0m19.063s 00:21:55.325 sys 0m1.759s 00:21:55.325 ************************************ 00:21:55.325 END TEST fio_dif_1_multi_subsystems 00:21:55.325 ************************************ 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:55.325 14:10:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:55.325 14:10:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 ************************************ 00:21:55.325 START TEST fio_dif_rand_params 00:21:55.325 ************************************ 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:55.325 14:10:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 bdev_null0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.325 [2024-07-25 14:10:04.204312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.325 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.325 { 00:21:55.325 "params": { 00:21:55.325 "name": "Nvme$subsystem", 00:21:55.325 "trtype": "$TEST_TRANSPORT", 00:21:55.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.325 "adrfam": "ipv4", 00:21:55.325 "trsvcid": "$NVMF_PORT", 00:21:55.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.326 "hdgst": ${hdgst:-false}, 00:21:55.326 "ddgst": ${ddgst:-false} 00:21:55.326 }, 00:21:55.326 "method": 
"bdev_nvme_attach_controller" 00:21:55.326 } 00:21:55.326 EOF 00:21:55.326 )") 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:55.326 "params": { 00:21:55.326 "name": "Nvme0", 00:21:55.326 "trtype": "tcp", 00:21:55.326 "traddr": "10.0.0.2", 00:21:55.326 "adrfam": "ipv4", 00:21:55.326 "trsvcid": "4420", 00:21:55.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:55.326 "hdgst": false, 00:21:55.326 "ddgst": false 00:21:55.326 }, 00:21:55.326 "method": "bdev_nvme_attach_controller" 00:21:55.326 }' 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:55.326 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.326 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:55.326 ... 
00:21:55.326 fio-3.35 00:21:55.326 Starting 3 threads 00:22:01.896 00:22:01.896 filename0: (groupid=0, jobs=1): err= 0: pid=82490: Thu Jul 25 14:10:09 2024 00:22:01.896 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5004msec) 00:22:01.896 slat (nsec): min=6448, max=56013, avg=15729.25, stdev=4043.79 00:22:01.896 clat (usec): min=6319, max=11839, avg=10053.13, stdev=522.98 00:22:01.896 lat (usec): min=6332, max=11857, avg=10068.85, stdev=523.56 00:22:01.896 clat percentiles (usec): 00:22:01.896 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9634], 00:22:01.896 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:22:01.896 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:22:01.896 | 99.00th=[11076], 99.50th=[11600], 99.90th=[11863], 99.95th=[11863], 00:22:01.896 | 99.99th=[11863] 00:22:01.896 bw ( KiB/s): min=36096, max=39936, per=33.29%, avg=38016.00, stdev=1039.88, samples=10 00:22:01.896 iops : min= 282, max= 312, avg=297.00, stdev= 8.12, samples=10 00:22:01.896 lat (msec) : 10=36.56%, 20=63.44% 00:22:01.896 cpu : usr=94.18%, sys=5.36%, ctx=44, majf=0, minf=0 00:22:01.896 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.896 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.896 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:01.896 filename0: (groupid=0, jobs=1): err= 0: pid=82491: Thu Jul 25 14:10:09 2024 00:22:01.896 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5001msec) 00:22:01.896 slat (nsec): min=6155, max=41245, avg=14955.27, stdev=4583.48 00:22:01.896 clat (usec): min=4363, max=11893, avg=10049.35, stdev=554.88 00:22:01.896 lat (usec): min=4374, max=11920, avg=10064.30, stdev=555.37 00:22:01.896 clat percentiles (usec): 00:22:01.896 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9634], 00:22:01.896 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:22:01.896 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:22:01.896 | 99.00th=[11076], 99.50th=[11207], 99.90th=[11863], 99.95th=[11863], 00:22:01.897 | 99.99th=[11863] 00:22:01.897 bw ( KiB/s): min=36864, max=39168, per=33.18%, avg=37888.00, stdev=665.11, samples=9 00:22:01.897 iops : min= 288, max= 306, avg=296.00, stdev= 5.20, samples=9 00:22:01.897 lat (msec) : 10=36.16%, 20=63.84% 00:22:01.897 cpu : usr=91.54%, sys=7.94%, ctx=70, majf=0, minf=0 00:22:01.897 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.897 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.897 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:01.897 filename0: (groupid=0, jobs=1): err= 0: pid=82492: Thu Jul 25 14:10:09 2024 00:22:01.897 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5004msec) 00:22:01.897 slat (nsec): min=6529, max=45925, avg=15620.34, stdev=3810.79 00:22:01.897 clat (usec): min=6322, max=11834, avg=10052.22, stdev=522.37 00:22:01.897 lat (usec): min=6334, max=11859, avg=10067.84, stdev=522.87 00:22:01.897 clat percentiles (usec): 00:22:01.897 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9634], 00:22:01.897 | 30.00th=[ 9896], 40.00th=[10028], 
50.00th=[10159], 60.00th=[10290], 00:22:01.897 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:22:01.897 | 99.00th=[11076], 99.50th=[11469], 99.90th=[11863], 99.95th=[11863], 00:22:01.897 | 99.99th=[11863] 00:22:01.897 bw ( KiB/s): min=36096, max=39936, per=33.29%, avg=38016.00, stdev=1039.88, samples=10 00:22:01.897 iops : min= 282, max= 312, avg=297.00, stdev= 8.12, samples=10 00:22:01.897 lat (msec) : 10=36.29%, 20=63.71% 00:22:01.897 cpu : usr=93.22%, sys=6.32%, ctx=7, majf=0, minf=9 00:22:01.897 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.897 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.897 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:01.897 00:22:01.897 Run status group 0 (all jobs): 00:22:01.897 READ: bw=112MiB/s (117MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=558MiB (585MB), run=5001-5004msec 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:01.897 
14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 bdev_null0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 [2024-07-25 14:10:10.194266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 bdev_null1 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 bdev_null2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:01.897 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.898 { 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme$subsystem", 00:22:01.898 "trtype": "$TEST_TRANSPORT", 00:22:01.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "$NVMF_PORT", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.898 "hdgst": ${hdgst:-false}, 00:22:01.898 "ddgst": ${ddgst:-false} 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 } 00:22:01.898 EOF 00:22:01.898 )") 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.898 { 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme$subsystem", 00:22:01.898 "trtype": "$TEST_TRANSPORT", 00:22:01.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "$NVMF_PORT", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.898 "hdgst": ${hdgst:-false}, 00:22:01.898 "ddgst": ${ddgst:-false} 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 } 00:22:01.898 EOF 00:22:01.898 )") 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 
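Interleaved with the JSON assembly above is gen_fio_conf (target/dif.sh@54-73), which emits the fio job file that fio reads from /dev/fd/61: a [global] section followed by one [filenameN] job per attached bdev, driven by the (( file <= files )) loop in the trace. The option lines themselves are not echoed, so the body of the sketch below is illustrative, using this test's parameters (bs=4k, iodepth=16, numjobs=8) and the usual NvmeXn1 bdev naming:

gen_fio_conf_sketch() {
    local files=$1 file
    # Global options shared by every job; the SPDK plugin expects thread=1.
    cat <<GLOBAL
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8
GLOBAL
    # One job section per exported namespace (filename = bdev name).
    for ((file = 0; file <= files; file++)); do
        cat <<SECTION
[filename$file]
filename=Nvme${file}n1
SECTION
    done
}

With files=2 this produces the three filename0/filename1/filename2 jobs that fio lists below before starting 24 threads (3 files x 8 jobs each).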
00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.898 { 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme$subsystem", 00:22:01.898 "trtype": "$TEST_TRANSPORT", 00:22:01.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "$NVMF_PORT", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.898 "hdgst": ${hdgst:-false}, 00:22:01.898 "ddgst": ${ddgst:-false} 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 } 00:22:01.898 EOF 00:22:01.898 )") 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme0", 00:22:01.898 "trtype": "tcp", 00:22:01.898 "traddr": "10.0.0.2", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "4420", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:01.898 "hdgst": false, 00:22:01.898 "ddgst": false 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 },{ 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme1", 00:22:01.898 "trtype": "tcp", 00:22:01.898 "traddr": "10.0.0.2", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "4420", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.898 "hdgst": false, 00:22:01.898 "ddgst": false 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 },{ 00:22:01.898 "params": { 00:22:01.898 "name": "Nvme2", 00:22:01.898 "trtype": "tcp", 00:22:01.898 "traddr": "10.0.0.2", 00:22:01.898 "adrfam": "ipv4", 00:22:01.898 "trsvcid": "4420", 00:22:01.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.898 "hdgst": false, 00:22:01.898 "ddgst": false 00:22:01.898 }, 00:22:01.898 "method": "bdev_nvme_attach_controller" 00:22:01.898 }' 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:01.898 14:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:01.898 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:01.898 ... 00:22:01.898 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:01.898 ... 00:22:01.898 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:01.898 ... 00:22:01.898 fio-3.35 00:22:01.898 Starting 24 threads 00:22:14.115 00:22:14.115 filename0: (groupid=0, jobs=1): err= 0: pid=82592: Thu Jul 25 14:10:21 2024 00:22:14.115 read: IOPS=200, BW=802KiB/s (822kB/s)(8044KiB/10024msec) 00:22:14.115 slat (usec): min=4, max=13992, avg=30.67, stdev=378.31 00:22:14.115 clat (msec): min=23, max=142, avg=79.49, stdev=19.86 00:22:14.115 lat (msec): min=23, max=142, avg=79.52, stdev=19.86 00:22:14.115 clat percentiles (msec): 00:22:14.115 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 62], 00:22:14.115 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 88], 00:22:14.115 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 108], 00:22:14.115 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 130], 00:22:14.115 | 99.99th=[ 144] 00:22:14.115 bw ( KiB/s): min= 656, max= 1144, per=4.33%, avg=798.00, stdev=105.84, samples=20 00:22:14.115 iops : min= 164, max= 286, avg=199.50, stdev=26.46, samples=20 00:22:14.115 lat (msec) : 50=7.86%, 100=76.18%, 250=15.96% 00:22:14.115 cpu : usr=33.08%, sys=1.41%, ctx=1407, majf=0, minf=9 00:22:14.115 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:22:14.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.115 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.115 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.115 filename0: (groupid=0, jobs=1): err= 0: pid=82593: Thu Jul 25 14:10:21 2024 00:22:14.115 read: IOPS=196, BW=786KiB/s (805kB/s)(7916KiB/10067msec) 00:22:14.115 slat (nsec): min=6318, max=96437, avg=13222.34, stdev=5921.58 00:22:14.115 clat (msec): min=5, max=140, avg=81.20, stdev=22.36 00:22:14.115 lat (msec): min=5, max=140, avg=81.21, stdev=22.36 00:22:14.115 clat percentiles (msec): 00:22:14.115 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 63], 00:22:14.115 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 92], 00:22:14.115 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 107], 95.00th=[ 110], 00:22:14.115 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 142], 00:22:14.115 | 99.99th=[ 142] 00:22:14.115 bw ( KiB/s): min= 640, max= 1408, per=4.25%, avg=784.90, stdev=162.59, samples=20 00:22:14.115 iops : min= 160, max= 352, avg=196.20, stdev=40.63, samples=20 00:22:14.115 lat (msec) : 10=1.62%, 20=0.81%, 50=4.65%, 100=71.65%, 250=21.27% 00:22:14.115 cpu : usr=38.03%, sys=1.77%, ctx=1233, majf=0, minf=0 00:22:14.115 IO depths : 1=0.2%, 2=1.5%, 4=5.4%, 8=77.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:14.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.115 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.115 issued rwts: 
total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.115 filename0: (groupid=0, jobs=1): err= 0: pid=82594: Thu Jul 25 14:10:21 2024 00:22:14.115 read: IOPS=165, BW=661KiB/s (677kB/s)(6640KiB/10039msec) 00:22:14.115 slat (nsec): min=6384, max=44793, avg=13138.14, stdev=5641.77 00:22:14.115 clat (msec): min=39, max=146, avg=96.55, stdev=21.71 00:22:14.115 lat (msec): min=39, max=146, avg=96.56, stdev=21.71 00:22:14.115 clat percentiles (msec): 00:22:14.115 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 83], 00:22:14.115 | 30.00th=[ 87], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 105], 00:22:14.116 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 125], 95.00th=[ 131], 00:22:14.116 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:22:14.116 | 99.99th=[ 146] 00:22:14.116 bw ( KiB/s): min= 512, max= 1008, per=3.58%, avg=660.00, stdev=121.51, samples=20 00:22:14.116 iops : min= 128, max= 252, avg=165.00, stdev=30.38, samples=20 00:22:14.116 lat (msec) : 50=2.65%, 100=52.23%, 250=45.12% 00:22:14.116 cpu : usr=32.39%, sys=1.54%, ctx=1149, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=3.7%, 4=15.5%, 8=66.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=92.1%, 8=4.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename0: (groupid=0, jobs=1): err= 0: pid=82595: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=204, BW=819KiB/s (839kB/s)(8196KiB/10003msec) 00:22:14.116 slat (usec): min=4, max=8038, avg=27.79, stdev=279.79 00:22:14.116 clat (msec): min=20, max=141, avg=77.99, stdev=19.80 00:22:14.116 lat (msec): min=20, max=141, avg=78.01, stdev=19.80 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 61], 00:22:14.116 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 85], 00:22:14.116 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:22:14.116 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 142], 00:22:14.116 | 99.99th=[ 142] 00:22:14.116 bw ( KiB/s): min= 720, max= 1040, per=4.42%, avg=815.58, stdev=70.93, samples=19 00:22:14.116 iops : min= 180, max= 260, avg=203.89, stdev=17.73, samples=19 00:22:14.116 lat (msec) : 50=8.64%, 100=76.28%, 250=15.08% 00:22:14.116 cpu : usr=43.71%, sys=1.97%, ctx=1364, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename0: (groupid=0, jobs=1): err= 0: pid=82596: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=203, BW=814KiB/s (834kB/s)(8160KiB/10019msec) 00:22:14.116 slat (usec): min=4, max=4029, avg=24.96, stdev=186.16 00:22:14.116 clat (msec): min=23, max=152, avg=78.47, stdev=20.04 00:22:14.116 lat (msec): min=23, max=152, avg=78.49, stdev=20.05 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:22:14.116 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 85], 00:22:14.116 | 
70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:22:14.116 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 153], 00:22:14.116 | 99.99th=[ 153] 00:22:14.116 bw ( KiB/s): min= 744, max= 912, per=4.39%, avg=809.25, stdev=58.26, samples=20 00:22:14.116 iops : min= 186, max= 228, avg=202.30, stdev=14.57, samples=20 00:22:14.116 lat (msec) : 50=8.24%, 100=75.78%, 250=15.98% 00:22:14.116 cpu : usr=42.51%, sys=1.91%, ctx=1353, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename0: (groupid=0, jobs=1): err= 0: pid=82597: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=191, BW=766KiB/s (784kB/s)(7676KiB/10025msec) 00:22:14.116 slat (usec): min=4, max=8022, avg=19.77, stdev=182.85 00:22:14.116 clat (msec): min=33, max=129, avg=83.42, stdev=18.64 00:22:14.116 lat (msec): min=33, max=129, avg=83.44, stdev=18.64 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 46], 5.00th=[ 54], 10.00th=[ 59], 20.00th=[ 65], 00:22:14.116 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 92], 00:22:14.116 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 107], 95.00th=[ 112], 00:22:14.116 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 130], 00:22:14.116 | 99.99th=[ 130] 00:22:14.116 bw ( KiB/s): min= 528, max= 1008, per=4.14%, avg=763.60, stdev=90.04, samples=20 00:22:14.116 iops : min= 132, max= 252, avg=190.90, stdev=22.51, samples=20 00:22:14.116 lat (msec) : 50=2.87%, 100=76.45%, 250=20.69% 00:22:14.116 cpu : usr=35.23%, sys=1.47%, ctx=1046, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=75.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename0: (groupid=0, jobs=1): err= 0: pid=82598: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=194, BW=779KiB/s (798kB/s)(7800KiB/10011msec) 00:22:14.116 slat (usec): min=3, max=4035, avg=22.53, stdev=157.48 00:22:14.116 clat (msec): min=22, max=123, avg=82.00, stdev=18.70 00:22:14.116 lat (msec): min=22, max=123, avg=82.03, stdev=18.70 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 64], 00:22:14.116 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 90], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 109], 00:22:14.116 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:22:14.116 | 99.99th=[ 124] 00:22:14.116 bw ( KiB/s): min= 640, max= 896, per=4.19%, avg=773.60, stdev=61.48, samples=20 00:22:14.116 iops : min= 160, max= 224, avg=193.40, stdev=15.37, samples=20 00:22:14.116 lat (msec) : 50=4.46%, 100=75.90%, 250=19.64% 00:22:14.116 cpu : usr=41.37%, sys=1.95%, ctx=1146, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename0: (groupid=0, jobs=1): err= 0: pid=82599: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=189, BW=758KiB/s (777kB/s)(7608KiB/10031msec) 00:22:14.116 slat (usec): min=4, max=11995, avg=26.68, stdev=342.67 00:22:14.116 clat (msec): min=32, max=139, avg=84.16, stdev=17.60 00:22:14.116 lat (msec): min=33, max=139, avg=84.19, stdev=17.60 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 45], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 68], 00:22:14.116 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 92], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 110], 00:22:14.116 | 99.00th=[ 117], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 140], 00:22:14.116 | 99.99th=[ 140] 00:22:14.116 bw ( KiB/s): min= 664, max= 896, per=4.09%, avg=754.40, stdev=64.63, samples=20 00:22:14.116 iops : min= 166, max= 224, avg=188.60, stdev=16.16, samples=20 00:22:14.116 lat (msec) : 50=2.73%, 100=78.13%, 250=19.14% 00:22:14.116 cpu : usr=33.40%, sys=1.12%, ctx=1397, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename1: (groupid=0, jobs=1): err= 0: pid=82600: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=189, BW=758KiB/s (777kB/s)(7612KiB/10037msec) 00:22:14.116 slat (usec): min=6, max=8011, avg=21.86, stdev=241.45 00:22:14.116 clat (msec): min=39, max=136, avg=84.13, stdev=17.73 00:22:14.116 lat (msec): min=39, max=136, avg=84.15, stdev=17.72 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 47], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 68], 00:22:14.116 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 93], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 110], 00:22:14.116 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 138], 00:22:14.116 | 99.99th=[ 138] 00:22:14.116 bw ( KiB/s): min= 656, max= 1026, per=4.11%, avg=757.70, stdev=85.67, samples=20 00:22:14.116 iops : min= 164, max= 256, avg=189.40, stdev=21.34, samples=20 00:22:14.116 lat (msec) : 50=2.52%, 100=78.88%, 250=18.60% 00:22:14.116 cpu : usr=36.00%, sys=1.67%, ctx=1113, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename1: (groupid=0, jobs=1): err= 0: pid=82601: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=191, BW=765KiB/s (784kB/s)(7672KiB/10025msec) 00:22:14.116 slat (usec): min=3, max=8025, avg=26.52, stdev=241.99 00:22:14.116 clat (msec): min=36, max=142, avg=83.42, stdev=18.25 00:22:14.116 lat (msec): min=36, max=142, avg=83.45, stdev=18.25 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 60], 20.00th=[ 66], 00:22:14.116 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 
86], 60.00th=[ 91], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 106], 95.00th=[ 110], 00:22:14.116 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:22:14.116 | 99.99th=[ 144] 00:22:14.116 bw ( KiB/s): min= 640, max= 897, per=4.14%, avg=763.25, stdev=62.43, samples=20 00:22:14.116 iops : min= 160, max= 224, avg=190.80, stdev=15.58, samples=20 00:22:14.116 lat (msec) : 50=1.98%, 100=76.80%, 250=21.22% 00:22:14.116 cpu : usr=42.58%, sys=1.59%, ctx=1193, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename1: (groupid=0, jobs=1): err= 0: pid=82602: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=194, BW=779KiB/s (798kB/s)(7796KiB/10010msec) 00:22:14.116 slat (usec): min=4, max=8013, avg=20.52, stdev=181.23 00:22:14.116 clat (msec): min=21, max=139, avg=82.06, stdev=19.07 00:22:14.116 lat (msec): min=21, max=139, avg=82.08, stdev=19.07 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 64], 00:22:14.116 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 90], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 111], 00:22:14.116 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 140], 00:22:14.116 | 99.99th=[ 140] 00:22:14.116 bw ( KiB/s): min= 640, max= 1008, per=4.19%, avg=773.20, stdev=78.99, samples=20 00:22:14.116 iops : min= 160, max= 252, avg=193.30, stdev=19.75, samples=20 00:22:14.116 lat (msec) : 50=4.21%, 100=76.24%, 250=19.55% 00:22:14.116 cpu : usr=43.26%, sys=1.59%, ctx=1208, majf=0, minf=9 00:22:14.116 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.116 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.116 filename1: (groupid=0, jobs=1): err= 0: pid=82603: Thu Jul 25 14:10:21 2024 00:22:14.116 read: IOPS=193, BW=772KiB/s (791kB/s)(7748KiB/10036msec) 00:22:14.116 slat (usec): min=3, max=8036, avg=21.07, stdev=203.78 00:22:14.116 clat (msec): min=36, max=149, avg=82.72, stdev=18.63 00:22:14.116 lat (msec): min=36, max=149, avg=82.74, stdev=18.63 00:22:14.116 clat percentiles (msec): 00:22:14.116 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 64], 00:22:14.116 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 91], 00:22:14.116 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 110], 00:22:14.117 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 150], 00:22:14.117 | 99.99th=[ 150] 00:22:14.117 bw ( KiB/s): min= 664, max= 1008, per=4.18%, avg=771.20, stdev=88.53, samples=20 00:22:14.117 iops : min= 166, max= 252, avg=192.80, stdev=22.13, samples=20 00:22:14.117 lat (msec) : 50=4.96%, 100=76.82%, 250=18.22% 00:22:14.117 cpu : usr=35.63%, sys=1.54%, ctx=990, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=80.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 
0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename1: (groupid=0, jobs=1): err= 0: pid=82604: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=201, BW=806KiB/s (825kB/s)(8068KiB/10015msec) 00:22:14.117 slat (usec): min=5, max=8043, avg=43.00, stdev=373.23 00:22:14.117 clat (msec): min=23, max=121, avg=79.23, stdev=18.68 00:22:14.117 lat (msec): min=23, max=121, avg=79.28, stdev=18.67 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 62], 00:22:14.117 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 86], 00:22:14.117 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 107], 00:22:14.117 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:22:14.117 | 99.99th=[ 123] 00:22:14.117 bw ( KiB/s): min= 656, max= 920, per=4.34%, avg=800.00, stdev=64.32, samples=20 00:22:14.117 iops : min= 164, max= 230, avg=200.00, stdev=16.08, samples=20 00:22:14.117 lat (msec) : 50=6.00%, 100=77.89%, 250=16.11% 00:22:14.117 cpu : usr=43.36%, sys=1.90%, ctx=1323, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename1: (groupid=0, jobs=1): err= 0: pid=82605: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=171, BW=687KiB/s (703kB/s)(6892KiB/10037msec) 00:22:14.117 slat (usec): min=3, max=8022, avg=27.49, stdev=286.12 00:22:14.117 clat (msec): min=38, max=155, avg=92.96, stdev=17.00 00:22:14.117 lat (msec): min=38, max=155, avg=92.99, stdev=17.00 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 47], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 81], 00:22:14.117 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 96], 00:22:14.117 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 122], 00:22:14.117 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 155], 99.95th=[ 155], 00:22:14.117 | 99.99th=[ 155] 00:22:14.117 bw ( KiB/s): min= 512, max= 897, per=3.70%, avg=682.85, stdev=89.60, samples=20 00:22:14.117 iops : min= 128, max= 224, avg=170.70, stdev=22.37, samples=20 00:22:14.117 lat (msec) : 50=1.74%, 100=64.54%, 250=33.72% 00:22:14.117 cpu : usr=39.00%, sys=1.70%, ctx=1148, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=5.3%, 4=21.1%, 8=60.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=93.3%, 8=2.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename1: (groupid=0, jobs=1): err= 0: pid=82606: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=190, BW=763KiB/s (782kB/s)(7664KiB/10042msec) 00:22:14.117 slat (usec): min=5, max=8022, avg=22.79, stdev=216.01 00:22:14.117 clat (msec): min=22, max=138, avg=83.63, stdev=18.56 00:22:14.117 lat (msec): min=22, max=138, avg=83.66, stdev=18.56 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 69], 00:22:14.117 | 
30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 93], 00:22:14.117 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 106], 95.00th=[ 110], 00:22:14.117 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 138], 00:22:14.117 | 99.99th=[ 138] 00:22:14.117 bw ( KiB/s): min= 640, max= 1024, per=4.12%, avg=760.00, stdev=81.46, samples=20 00:22:14.117 iops : min= 160, max= 256, avg=190.00, stdev=20.37, samples=20 00:22:14.117 lat (msec) : 50=4.44%, 100=75.57%, 250=19.99% 00:22:14.117 cpu : usr=35.31%, sys=1.49%, ctx=962, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename1: (groupid=0, jobs=1): err= 0: pid=82607: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=188, BW=752KiB/s (770kB/s)(7568KiB/10058msec) 00:22:14.117 slat (usec): min=4, max=8030, avg=28.04, stdev=285.55 00:22:14.117 clat (msec): min=2, max=143, avg=84.81, stdev=23.16 00:22:14.117 lat (msec): min=2, max=143, avg=84.83, stdev=23.17 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 69], 00:22:14.117 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 93], 00:22:14.117 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 116], 00:22:14.117 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:22:14.117 | 99.99th=[ 144] 00:22:14.117 bw ( KiB/s): min= 640, max= 1523, per=4.06%, avg=749.45, stdev=189.45, samples=20 00:22:14.117 iops : min= 160, max= 380, avg=187.30, stdev=47.21, samples=20 00:22:14.117 lat (msec) : 4=0.85%, 10=1.69%, 50=2.70%, 100=68.87%, 250=25.90% 00:22:14.117 cpu : usr=40.35%, sys=1.59%, ctx=1134, majf=0, minf=9 00:22:14.117 IO depths : 1=0.2%, 2=2.4%, 4=9.6%, 8=72.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=90.2%, 8=7.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename2: (groupid=0, jobs=1): err= 0: pid=82608: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:22:14.117 slat (usec): min=5, max=4058, avg=25.06, stdev=205.20 00:22:14.117 clat (msec): min=39, max=129, avg=84.11, stdev=18.17 00:22:14.117 lat (msec): min=39, max=129, avg=84.14, stdev=18.17 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 66], 00:22:14.117 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 92], 00:22:14.117 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 107], 95.00th=[ 110], 00:22:14.117 | 99.00th=[ 127], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:22:14.117 | 99.99th=[ 130] 00:22:14.117 bw ( KiB/s): min= 640, max= 1024, per=4.11%, avg=757.60, stdev=83.59, samples=20 00:22:14.117 iops : min= 160, max= 256, avg=189.40, stdev=20.90, samples=20 00:22:14.117 lat (msec) : 50=2.36%, 100=75.84%, 250=21.80% 00:22:14.117 cpu : usr=43.68%, sys=1.83%, ctx=1525, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=89.2%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename2: (groupid=0, jobs=1): err= 0: pid=82609: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=199, BW=800KiB/s (819kB/s)(8012KiB/10021msec) 00:22:14.117 slat (usec): min=6, max=9043, avg=28.94, stdev=318.91 00:22:14.117 clat (msec): min=22, max=124, avg=79.96, stdev=18.54 00:22:14.117 lat (msec): min=22, max=124, avg=79.98, stdev=18.56 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 40], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 64], 00:22:14.117 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:22:14.117 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 108], 00:22:14.117 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 125], 99.95th=[ 125], 00:22:14.117 | 99.99th=[ 125] 00:22:14.117 bw ( KiB/s): min= 696, max= 888, per=4.31%, avg=794.80, stdev=59.03, samples=20 00:22:14.117 iops : min= 174, max= 222, avg=198.70, stdev=14.76, samples=20 00:22:14.117 lat (msec) : 50=5.74%, 100=78.03%, 250=16.23% 00:22:14.117 cpu : usr=32.19%, sys=1.30%, ctx=1191, majf=0, minf=0 00:22:14.117 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename2: (groupid=0, jobs=1): err= 0: pid=82610: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=190, BW=763KiB/s (782kB/s)(7660KiB/10033msec) 00:22:14.117 slat (usec): min=6, max=5819, avg=21.27, stdev=161.23 00:22:14.117 clat (msec): min=38, max=146, avg=83.67, stdev=18.82 00:22:14.117 lat (msec): min=38, max=146, avg=83.69, stdev=18.83 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 67], 00:22:14.117 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 93], 00:22:14.117 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 111], 00:22:14.117 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 148], 00:22:14.117 | 99.99th=[ 148] 00:22:14.117 bw ( KiB/s): min= 624, max= 1024, per=4.12%, avg=759.60, stdev=93.60, samples=20 00:22:14.117 iops : min= 156, max= 256, avg=189.90, stdev=23.40, samples=20 00:22:14.117 lat (msec) : 50=5.12%, 100=74.99%, 250=19.90% 00:22:14.117 cpu : usr=38.07%, sys=1.56%, ctx=1044, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename2: (groupid=0, jobs=1): err= 0: pid=82611: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=187, BW=749KiB/s (767kB/s)(7532KiB/10050msec) 00:22:14.117 slat (nsec): min=5116, max=52903, avg=14827.94, stdev=5840.74 00:22:14.117 clat (msec): min=7, max=148, avg=85.23, stdev=20.31 00:22:14.117 lat (msec): min=7, max=149, avg=85.24, stdev=20.31 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 25], 
5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:22:14.117 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 94], 00:22:14.117 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 111], 00:22:14.117 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 150], 00:22:14.117 | 99.99th=[ 150] 00:22:14.117 bw ( KiB/s): min= 632, max= 1336, per=4.05%, avg=746.85, stdev=151.51, samples=20 00:22:14.117 iops : min= 158, max= 334, avg=186.70, stdev=37.87, samples=20 00:22:14.117 lat (msec) : 10=0.85%, 50=5.74%, 100=71.91%, 250=21.51% 00:22:14.117 cpu : usr=31.61%, sys=1.26%, ctx=1111, majf=0, minf=9 00:22:14.117 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=81.2%, 16=17.6%, 32=0.0%, >=64=0.0% 00:22:14.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 complete : 0=0.0%, 4=88.6%, 8=11.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.117 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.117 filename2: (groupid=0, jobs=1): err= 0: pid=82612: Thu Jul 25 14:10:21 2024 00:22:14.117 read: IOPS=199, BW=799KiB/s (819kB/s)(8028KiB/10042msec) 00:22:14.117 slat (usec): min=6, max=8016, avg=19.24, stdev=178.70 00:22:14.117 clat (msec): min=27, max=132, avg=79.87, stdev=19.58 00:22:14.117 lat (msec): min=27, max=132, avg=79.89, stdev=19.59 00:22:14.117 clat percentiles (msec): 00:22:14.117 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 63], 00:22:14.117 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 89], 00:22:14.117 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 105], 95.00th=[ 108], 00:22:14.117 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 126], 00:22:14.117 | 99.99th=[ 133] 00:22:14.117 bw ( KiB/s): min= 672, max= 1170, per=4.32%, avg=796.50, stdev=107.02, samples=20 00:22:14.117 iops : min= 168, max= 292, avg=199.10, stdev=26.66, samples=20 00:22:14.118 lat (msec) : 50=8.82%, 100=74.79%, 250=16.39% 00:22:14.118 cpu : usr=42.49%, sys=1.75%, ctx=1361, majf=0, minf=9 00:22:14.118 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:14.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.118 filename2: (groupid=0, jobs=1): err= 0: pid=82613: Thu Jul 25 14:10:21 2024 00:22:14.118 read: IOPS=197, BW=789KiB/s (808kB/s)(7904KiB/10013msec) 00:22:14.118 slat (usec): min=3, max=8030, avg=23.90, stdev=254.91 00:22:14.118 clat (msec): min=18, max=144, avg=80.98, stdev=19.42 00:22:14.118 lat (msec): min=18, max=144, avg=81.00, stdev=19.43 00:22:14.118 clat percentiles (msec): 00:22:14.118 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 62], 00:22:14.118 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 89], 00:22:14.118 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 110], 00:22:14.118 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 144], 00:22:14.118 | 99.99th=[ 144] 00:22:14.118 bw ( KiB/s): min= 696, max= 892, per=4.25%, avg=783.80, stdev=56.34, samples=20 00:22:14.118 iops : min= 174, max= 223, avg=195.95, stdev=14.08, samples=20 00:22:14.118 lat (msec) : 20=0.30%, 50=5.47%, 100=77.02%, 250=17.21% 00:22:14.118 cpu : usr=34.17%, sys=1.43%, ctx=944, majf=0, minf=9 00:22:14.118 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.2%, 16=16.0%, 
32=0.0%, >=64=0.0% 00:22:14.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.118 filename2: (groupid=0, jobs=1): err= 0: pid=82614: Thu Jul 25 14:10:21 2024 00:22:14.118 read: IOPS=199, BW=797KiB/s (816kB/s)(8012KiB/10051msec) 00:22:14.118 slat (usec): min=4, max=8039, avg=24.07, stdev=261.96 00:22:14.118 clat (usec): min=1954, max=145961, avg=80046.86, stdev=23350.72 00:22:14.118 lat (usec): min=1964, max=145977, avg=80070.93, stdev=23354.82 00:22:14.118 clat percentiles (msec): 00:22:14.118 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 63], 00:22:14.118 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 92], 00:22:14.118 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 109], 00:22:14.118 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 144], 00:22:14.118 | 99.99th=[ 146] 00:22:14.118 bw ( KiB/s): min= 656, max= 1688, per=4.31%, avg=794.60, stdev=219.96, samples=20 00:22:14.118 iops : min= 164, max= 422, avg=198.65, stdev=54.99, samples=20 00:22:14.118 lat (msec) : 2=0.45%, 4=0.95%, 10=0.90%, 20=0.10%, 50=8.44% 00:22:14.118 lat (msec) : 100=70.89%, 250=18.27% 00:22:14.118 cpu : usr=37.03%, sys=1.50%, ctx=1061, majf=0, minf=9 00:22:14.118 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:22:14.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.118 filename2: (groupid=0, jobs=1): err= 0: pid=82615: Thu Jul 25 14:10:21 2024 00:22:14.118 read: IOPS=194, BW=780KiB/s (798kB/s)(7816KiB/10025msec) 00:22:14.118 slat (usec): min=3, max=8060, avg=29.68, stdev=300.77 00:22:14.118 clat (msec): min=38, max=147, avg=81.90, stdev=18.14 00:22:14.118 lat (msec): min=38, max=147, avg=81.93, stdev=18.13 00:22:14.118 clat percentiles (msec): 00:22:14.118 | 1.00th=[ 46], 5.00th=[ 54], 10.00th=[ 59], 20.00th=[ 64], 00:22:14.118 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 89], 00:22:14.118 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 110], 00:22:14.118 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 148], 99.95th=[ 148], 00:22:14.118 | 99.99th=[ 148] 00:22:14.118 bw ( KiB/s): min= 624, max= 896, per=4.20%, avg=775.20, stdev=68.62, samples=20 00:22:14.118 iops : min= 156, max= 224, avg=193.80, stdev=17.15, samples=20 00:22:14.118 lat (msec) : 50=4.04%, 100=78.56%, 250=17.40% 00:22:14.118 cpu : usr=36.67%, sys=1.69%, ctx=1112, majf=0, minf=9 00:22:14.118 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:14.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.118 issued rwts: total=1954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:14.118 00:22:14.118 Run status group 0 (all jobs): 00:22:14.118 READ: bw=18.0MiB/s (18.9MB/s), 661KiB/s-819KiB/s (677kB/s-839kB/s), io=181MiB (190MB), run=10003-10067msec 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 
1 2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # 
NULL_DIF=1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 bdev_null0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 [2024-07-25 14:10:21.541057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 bdev_null1 
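Note: the create_subsystem calls traced here (subsystem 0 above, subsystem 1 just below) reduce to four RPC calls each. A minimal manual sketch using SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket and that the tcp transport was already created earlier in the run, would be:

    # null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported over NVMe/TCP
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # subsystem 1 repeats the same four calls with bdev_null1, cnode1 and serial 53313233-1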
00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.118 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:14.119 { 00:22:14.119 "params": { 00:22:14.119 "name": "Nvme$subsystem", 00:22:14.119 "trtype": "$TEST_TRANSPORT", 00:22:14.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.119 "adrfam": "ipv4", 00:22:14.119 "trsvcid": "$NVMF_PORT", 00:22:14.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.119 "hdgst": ${hdgst:-false}, 
00:22:14.119 "ddgst": ${ddgst:-false} 00:22:14.119 }, 00:22:14.119 "method": "bdev_nvme_attach_controller" 00:22:14.119 } 00:22:14.119 EOF 00:22:14.119 )") 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:14.119 { 00:22:14.119 "params": { 00:22:14.119 "name": "Nvme$subsystem", 00:22:14.119 "trtype": "$TEST_TRANSPORT", 00:22:14.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.119 "adrfam": "ipv4", 00:22:14.119 "trsvcid": "$NVMF_PORT", 00:22:14.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.119 "hdgst": ${hdgst:-false}, 00:22:14.119 "ddgst": ${ddgst:-false} 00:22:14.119 }, 00:22:14.119 "method": "bdev_nvme_attach_controller" 00:22:14.119 } 00:22:14.119 EOF 00:22:14.119 )") 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:14.119 "params": { 00:22:14.119 "name": "Nvme0", 00:22:14.119 "trtype": "tcp", 00:22:14.119 "traddr": "10.0.0.2", 00:22:14.119 "adrfam": "ipv4", 00:22:14.119 "trsvcid": "4420", 00:22:14.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.119 "hdgst": false, 00:22:14.119 "ddgst": false 00:22:14.119 }, 00:22:14.119 "method": "bdev_nvme_attach_controller" 00:22:14.119 },{ 00:22:14.119 "params": { 00:22:14.119 "name": "Nvme1", 00:22:14.119 "trtype": "tcp", 00:22:14.119 "traddr": "10.0.0.2", 00:22:14.119 "adrfam": "ipv4", 00:22:14.119 "trsvcid": "4420", 00:22:14.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.119 "hdgst": false, 00:22:14.119 "ddgst": false 00:22:14.119 }, 00:22:14.119 "method": "bdev_nvme_attach_controller" 00:22:14.119 }' 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:14.119 14:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.119 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:14.119 ... 00:22:14.119 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:14.119 ... 
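Note: the two job sections listed above (filename0 and filename1, two jobs each, which become the four threads started below) come from gen_fio_conf. The exact file it writes is not echoed in this log; an approximate job file matching the traced parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, randread, thread mode) might look like:

    cat > dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    # bdev names are assumed from the Nvme0/Nvme1 controllers attached in the JSON above
    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

dif_rand.fio is a hypothetical name; the harness actually feeds the generated config to fio through an anonymous file descriptor rather than a named file.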
00:22:14.119 fio-3.35 00:22:14.119 Starting 4 threads 00:22:18.335 00:22:18.335 filename0: (groupid=0, jobs=1): err= 0: pid=82767: Thu Jul 25 14:10:27 2024 00:22:18.335 read: IOPS=2555, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5001msec) 00:22:18.335 slat (nsec): min=5795, max=39086, avg=12500.62, stdev=3497.12 00:22:18.335 clat (usec): min=764, max=5587, avg=3095.39, stdev=920.76 00:22:18.335 lat (usec): min=771, max=5600, avg=3107.89, stdev=920.48 00:22:18.335 clat percentiles (usec): 00:22:18.335 | 1.00th=[ 1647], 5.00th=[ 1745], 10.00th=[ 1860], 20.00th=[ 1991], 00:22:18.336 | 30.00th=[ 2212], 40.00th=[ 2737], 50.00th=[ 3490], 60.00th=[ 3687], 00:22:18.336 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4178], 00:22:18.336 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5538], 00:22:18.336 | 99.99th=[ 5604] 00:22:18.336 bw ( KiB/s): min=15104, max=24064, per=27.06%, avg=20926.67, stdev=3156.97, samples=9 00:22:18.336 iops : min= 1888, max= 3008, avg=2615.78, stdev=394.61, samples=9 00:22:18.336 lat (usec) : 1000=0.14% 00:22:18.336 lat (msec) : 2=20.31%, 4=64.38%, 10=15.17% 00:22:18.336 cpu : usr=94.44%, sys=4.84%, ctx=104, majf=0, minf=0 00:22:18.336 IO depths : 1=0.1%, 2=6.1%, 4=60.4%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 issued rwts: total=12782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:18.336 filename0: (groupid=0, jobs=1): err= 0: pid=82768: Thu Jul 25 14:10:27 2024 00:22:18.336 read: IOPS=2255, BW=17.6MiB/s (18.5MB/s)(88.1MiB/5001msec) 00:22:18.336 slat (nsec): min=6401, max=42421, avg=13943.82, stdev=2875.15 00:22:18.336 clat (usec): min=649, max=5672, avg=3500.16, stdev=751.74 00:22:18.336 lat (usec): min=664, max=5703, avg=3514.10, stdev=751.94 00:22:18.336 clat percentiles (usec): 00:22:18.336 | 1.00th=[ 1631], 5.00th=[ 1893], 10.00th=[ 2073], 20.00th=[ 3097], 00:22:18.336 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3851], 00:22:18.336 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4228], 00:22:18.336 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5407], 00:22:18.336 | 99.99th=[ 5407] 00:22:18.336 bw ( KiB/s): min=15775, max=21952, per=22.72%, avg=17567.89, stdev=2359.28, samples=9 00:22:18.336 iops : min= 1971, max= 2744, avg=2195.89, stdev=294.99, samples=9 00:22:18.336 lat (usec) : 750=0.01% 00:22:18.336 lat (msec) : 2=8.39%, 4=68.71%, 10=22.89% 00:22:18.336 cpu : usr=94.88%, sys=4.46%, ctx=15, majf=0, minf=0 00:22:18.336 IO depths : 1=0.1%, 2=15.8%, 4=55.1%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 issued rwts: total=11282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:18.336 filename1: (groupid=0, jobs=1): err= 0: pid=82769: Thu Jul 25 14:10:27 2024 00:22:18.336 read: IOPS=2601, BW=20.3MiB/s (21.3MB/s)(102MiB/5004msec) 00:22:18.336 slat (nsec): min=5677, max=41508, avg=12395.18, stdev=3512.92 00:22:18.336 clat (usec): min=753, max=6208, avg=3042.32, stdev=898.59 00:22:18.336 lat (usec): min=760, max=6225, avg=3054.72, stdev=899.10 00:22:18.336 clat percentiles (usec): 00:22:18.336 | 1.00th=[ 1500], 5.00th=[ 
1729], 10.00th=[ 1844], 20.00th=[ 1991], 00:22:18.336 | 30.00th=[ 2212], 40.00th=[ 2638], 50.00th=[ 3425], 60.00th=[ 3621], 00:22:18.336 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4047], 95.00th=[ 4146], 00:22:18.336 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4555], 99.95th=[ 4817], 00:22:18.336 | 99.99th=[ 4883] 00:22:18.336 bw ( KiB/s): min=16000, max=24272, per=27.59%, avg=21335.33, stdev=2578.43, samples=9 00:22:18.336 iops : min= 2000, max= 3034, avg=2666.89, stdev=322.30, samples=9 00:22:18.336 lat (usec) : 1000=0.13% 00:22:18.336 lat (msec) : 2=20.93%, 4=66.04%, 10=12.90% 00:22:18.336 cpu : usr=93.86%, sys=5.48%, ctx=11, majf=0, minf=0 00:22:18.336 IO depths : 1=0.1%, 2=5.1%, 4=61.0%, 8=33.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 issued rwts: total=13020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:18.336 filename1: (groupid=0, jobs=1): err= 0: pid=82770: Thu Jul 25 14:10:27 2024 00:22:18.336 read: IOPS=2256, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5002msec) 00:22:18.336 slat (nsec): min=6777, max=43742, avg=13832.79, stdev=2826.16 00:22:18.336 clat (usec): min=982, max=5199, avg=3500.55, stdev=748.62 00:22:18.336 lat (usec): min=995, max=5214, avg=3514.38, stdev=748.97 00:22:18.336 clat percentiles (usec): 00:22:18.336 | 1.00th=[ 1631], 5.00th=[ 1893], 10.00th=[ 2073], 20.00th=[ 3097], 00:22:18.336 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3851], 00:22:18.336 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4228], 00:22:18.336 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5014], 99.95th=[ 5080], 00:22:18.336 | 99.99th=[ 5145] 00:22:18.336 bw ( KiB/s): min=15840, max=21952, per=22.72%, avg=17571.78, stdev=2345.72, samples=9 00:22:18.336 iops : min= 1980, max= 2744, avg=2196.44, stdev=293.22, samples=9 00:22:18.336 lat (usec) : 1000=0.02% 00:22:18.336 lat (msec) : 2=8.48%, 4=68.60%, 10=22.90% 00:22:18.336 cpu : usr=94.04%, sys=5.28%, ctx=72, majf=0, minf=0 00:22:18.336 IO depths : 1=0.1%, 2=15.8%, 4=55.1%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.336 issued rwts: total=11285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:18.336 00:22:18.336 Run status group 0 (all jobs): 00:22:18.336 READ: bw=75.5MiB/s (79.2MB/s), 17.6MiB/s-20.3MiB/s (18.5MB/s-21.3MB/s), io=378MiB (396MB), run=5001-5004msec 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.336 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 ************************************ 00:22:18.596 END TEST fio_dif_rand_params 00:22:18.596 ************************************ 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 00:22:18.596 real 0m23.500s 00:22:18.596 user 2m6.606s 00:22:18.596 sys 0m6.783s 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 14:10:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:18.596 14:10:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.596 14:10:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 ************************************ 00:22:18.596 START TEST fio_dif_digest 00:22:18.596 ************************************ 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:18.596 14:10:27 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 bdev_null0 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.596 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 [2024-07-25 14:10:27.784667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:18.597 { 00:22:18.597 "params": { 00:22:18.597 "name": 
"Nvme$subsystem", 00:22:18.597 "trtype": "$TEST_TRANSPORT", 00:22:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.597 "adrfam": "ipv4", 00:22:18.597 "trsvcid": "$NVMF_PORT", 00:22:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.597 "hdgst": ${hdgst:-false}, 00:22:18.597 "ddgst": ${ddgst:-false} 00:22:18.597 }, 00:22:18.597 "method": "bdev_nvme_attach_controller" 00:22:18.597 } 00:22:18.597 EOF 00:22:18.597 )") 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:18.597 "params": { 00:22:18.597 "name": "Nvme0", 00:22:18.597 "trtype": "tcp", 00:22:18.597 "traddr": "10.0.0.2", 00:22:18.597 "adrfam": "ipv4", 00:22:18.597 "trsvcid": "4420", 00:22:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:18.597 "hdgst": true, 00:22:18.597 "ddgst": true 00:22:18.597 }, 00:22:18.597 "method": "bdev_nvme_attach_controller" 00:22:18.597 }' 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:18.597 14:10:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:18.857 ... 
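Note: the fio_bdev wrapper here preloads the SPDK bdev engine and hands fio both the bdev JSON (the Nvme0 attach with hdgst/ddgst set to true, printed above) and the generated job file through /dev/fd/62 and /dev/fd/61. With ordinary files substituted for the descriptors (bdev.json and dif_digest.fio are hypothetical names), roughly the same invocation would be:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_digest.fio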
00:22:18.857 fio-3.35 00:22:18.857 Starting 3 threads 00:22:31.066 00:22:31.066 filename0: (groupid=0, jobs=1): err= 0: pid=82876: Thu Jul 25 14:10:38 2024 00:22:31.066 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(326MiB/10005msec) 00:22:31.066 slat (nsec): min=6166, max=69720, avg=10523.19, stdev=5272.66 00:22:31.066 clat (usec): min=9771, max=13454, avg=11499.84, stdev=957.51 00:22:31.066 lat (usec): min=9781, max=13470, avg=11510.36, stdev=959.08 00:22:31.066 clat percentiles (usec): 00:22:31.067 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10290], 20.00th=[10421], 00:22:31.067 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:22:31.067 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12649], 95.00th=[12780], 00:22:31.067 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13435], 99.95th=[13435], 00:22:31.067 | 99.99th=[13435] 00:22:31.067 bw ( KiB/s): min=29952, max=36864, per=33.24%, avg=33226.11, stdev=2557.98, samples=19 00:22:31.067 iops : min= 234, max= 288, avg=259.58, stdev=19.98, samples=19 00:22:31.067 lat (msec) : 10=1.15%, 20=98.85% 00:22:31.067 cpu : usr=93.79%, sys=5.74%, ctx=103, majf=0, minf=0 00:22:31.067 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:31.067 filename0: (groupid=0, jobs=1): err= 0: pid=82877: Thu Jul 25 14:10:38 2024 00:22:31.067 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(326MiB/10005msec) 00:22:31.067 slat (nsec): min=6154, max=58017, avg=10458.47, stdev=4932.23 00:22:31.067 clat (usec): min=8219, max=13727, avg=11500.03, stdev=962.54 00:22:31.067 lat (usec): min=8227, max=13742, avg=11510.49, stdev=963.96 00:22:31.067 clat percentiles (usec): 00:22:31.067 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10290], 20.00th=[10421], 00:22:31.067 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:22:31.067 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12649], 95.00th=[12780], 00:22:31.067 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[13698], 00:22:31.067 | 99.99th=[13698] 00:22:31.067 bw ( KiB/s): min=29952, max=36864, per=33.24%, avg=33222.37, stdev=2630.62, samples=19 00:22:31.067 iops : min= 234, max= 288, avg=259.53, stdev=20.53, samples=19 00:22:31.067 lat (msec) : 10=0.96%, 20=99.04% 00:22:31.067 cpu : usr=94.04%, sys=5.53%, ctx=12, majf=0, minf=0 00:22:31.067 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:31.067 filename0: (groupid=0, jobs=1): err= 0: pid=82878: Thu Jul 25 14:10:38 2024 00:22:31.067 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(326MiB/10004msec) 00:22:31.067 slat (nsec): min=6029, max=42098, avg=9823.05, stdev=3537.12 00:22:31.067 clat (usec): min=6462, max=14642, avg=11501.47, stdev=975.79 00:22:31.067 lat (usec): min=6469, max=14685, avg=11511.29, stdev=976.77 00:22:31.067 clat percentiles (usec): 00:22:31.067 | 1.00th=[ 9896], 5.00th=[10159], 10.00th=[10290], 20.00th=[10421], 00:22:31.067 | 30.00th=[10683], 
40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:22:31.067 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12649], 95.00th=[12780], 00:22:31.067 | 99.00th=[12911], 99.50th=[13042], 99.90th=[14615], 99.95th=[14615], 00:22:31.067 | 99.99th=[14615] 00:22:31.067 bw ( KiB/s): min=29952, max=36096, per=33.24%, avg=33225.53, stdev=2601.44, samples=19 00:22:31.067 iops : min= 234, max= 282, avg=259.53, stdev=20.34, samples=19 00:22:31.067 lat (msec) : 10=1.50%, 20=98.50% 00:22:31.067 cpu : usr=94.29%, sys=5.30%, ctx=14, majf=0, minf=0 00:22:31.067 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.067 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:31.067 00:22:31.067 Run status group 0 (all jobs): 00:22:31.067 READ: bw=97.6MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=977MiB (1024MB), run=10004-10005msec 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.067 00:22:31.067 real 0m10.993s 00:22:31.067 user 0m28.890s 00:22:31.067 sys 0m1.949s 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.067 14:10:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:31.067 ************************************ 00:22:31.067 END TEST fio_dif_digest 00:22:31.067 ************************************ 00:22:31.067 14:10:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:31.067 14:10:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.067 rmmod nvme_tcp 00:22:31.067 rmmod nvme_fabrics 00:22:31.067 rmmod nvme_keyring 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82108 ']' 00:22:31.067 14:10:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82108 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 82108 ']' 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 82108 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82108 00:22:31.067 killing process with pid 82108 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82108' 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@969 -- # kill 82108 00:22:31.067 14:10:38 nvmf_dif -- common/autotest_common.sh@974 -- # wait 82108 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:31.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:31.067 Waiting for block devices as requested 00:22:31.067 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:31.067 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.067 14:10:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:31.067 14:10:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.067 14:10:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:31.067 00:22:31.067 real 0m59.918s 00:22:31.067 user 3m52.098s 00:22:31.067 sys 0m16.530s 00:22:31.067 14:10:39 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.067 ************************************ 00:22:31.067 END TEST nvmf_dif 00:22:31.067 ************************************ 00:22:31.067 14:10:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:31.067 14:10:39 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:31.067 14:10:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:31.068 14:10:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.068 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:22:31.068 ************************************ 00:22:31.068 START TEST nvmf_abort_qd_sizes 00:22:31.068 ************************************ 00:22:31.068 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:31.068 * Looking for test storage... 
00:22:31.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:31.068 14:10:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:31.068 Cannot find device "nvmf_tgt_br" 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.068 Cannot find device "nvmf_tgt_br2" 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:31.068 Cannot find device "nvmf_tgt_br" 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:31.068 Cannot find device "nvmf_tgt_br2" 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.068 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.069 14:10:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:31.069 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.327 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:31.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:31.328 00:22:31.328 --- 10.0.0.2 ping statistics --- 00:22:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.328 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:31.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:31.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:31.328 00:22:31.328 --- 10.0.0.3 ping statistics --- 00:22:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.328 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:31.328 00:22:31.328 --- 10.0.0.1 ping statistics --- 00:22:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.328 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:31.328 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:31.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:32.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:32.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83470 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83470 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83470 ']' 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.227 14:10:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:32.227 [2024-07-25 14:10:41.495940] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:22:32.227 [2024-07-25 14:10:41.496006] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.486 [2024-07-25 14:10:41.633727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.486 [2024-07-25 14:10:41.736165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.486 [2024-07-25 14:10:41.736216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.486 [2024-07-25 14:10:41.736222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.486 [2024-07-25 14:10:41.736227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.486 [2024-07-25 14:10:41.736230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.486 [2024-07-25 14:10:41.737276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.486 [2024-07-25 14:10:41.737389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.486 [2024-07-25 14:10:41.737459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.486 [2024-07-25 14:10:41.737460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.486 [2024-07-25 14:10:41.779926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:33.055 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:22:33.314 14:10:42 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
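The scan above reduces to picking PCI functions whose class/subclass/prog-if is 01/08/02 (an NVMe controller) and keeping only those still bound to the kernel nvme driver. A rough standalone equivalent of that selection, with the PCI_ALLOWED/PCI_BLOCKED filtering that pci_can_use applies left out, is:

  # Enumerate NVMe controllers the way iter_pci_class_code 01 08 02 does,
  # then keep only the ones the kernel nvme driver currently claims.
  nvmes=()
  while read -r bdf; do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
  done < <(lspci -mm -n -D | grep -i -- -p02 \
           | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
  printf '%s\n' "${nvmes[@]}"   # on this VM: 0000:00:10.0 and 0000:00:11.0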
00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:33.314 14:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:33.314 ************************************ 00:22:33.314 START TEST spdk_target_abort 00:22:33.314 ************************************ 00:22:33.314 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 spdk_targetn1 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 [2024-07-25 14:10:42.499513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:33.315 [2024-07-25 14:10:42.531614] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.315 14:10:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:33.315 14:10:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:36.604 Initializing NVMe Controllers 00:22:36.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:36.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:36.605 Initialization complete. Launching workers. 
00:22:36.605 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11970, failed: 0 00:22:36.605 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1079, failed to submit 10891 00:22:36.605 success 752, unsuccess 327, failed 0 00:22:36.605 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:36.605 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:39.899 Initializing NVMe Controllers 00:22:39.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:39.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:39.899 Initialization complete. Launching workers. 00:22:39.899 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:22:39.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1152, failed to submit 7848 00:22:39.900 success 358, unsuccess 794, failed 0 00:22:39.900 14:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:39.900 14:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:43.185 Initializing NVMe Controllers 00:22:43.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:43.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:43.186 Initialization complete. Launching workers. 
00:22:43.186 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33020, failed: 0 00:22:43.186 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2421, failed to submit 30599 00:22:43.186 success 550, unsuccess 1871, failed 0 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.186 14:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83470 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83470 ']' 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83470 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83470 00:22:45.092 killing process with pid 83470 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83470' 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83470 00:22:45.092 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83470 00:22:45.092 ************************************ 00:22:45.092 END TEST spdk_target_abort 00:22:45.092 ************************************ 00:22:45.092 00:22:45.092 real 0m11.705s 00:22:45.092 user 0m47.187s 00:22:45.092 sys 0m1.868s 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:45.092 14:10:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:45.092 14:10:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:45.092 14:10:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.092 14:10:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:45.092 ************************************ 00:22:45.092 START TEST kernel_target_abort 00:22:45.092 
************************************ 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:45.092 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:45.093 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:45.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:45.663 Waiting for block devices as requested 00:22:45.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:45.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:45.663 14:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:45.925 No valid GPT data, bailing 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:45.925 No valid GPT data, bailing 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:45.925 No valid GPT data, bailing 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:45.925 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:45.925 No valid GPT data, bailing 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 --hostid=ae1cc223-8955-4554-9c53-a88c4ce7ab12 -a 10.0.0.1 -t tcp -s 4420 00:22:46.183 00:22:46.183 Discovery Log Number of Records 2, Generation counter 2 00:22:46.183 =====Discovery Log Entry 0====== 00:22:46.183 trtype: tcp 00:22:46.183 adrfam: ipv4 00:22:46.183 subtype: current discovery subsystem 00:22:46.183 treq: not specified, sq flow control disable supported 00:22:46.183 portid: 1 00:22:46.183 trsvcid: 4420 00:22:46.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:46.183 traddr: 10.0.0.1 00:22:46.183 eflags: none 00:22:46.183 sectype: none 00:22:46.183 =====Discovery Log Entry 1====== 00:22:46.183 trtype: tcp 00:22:46.183 adrfam: ipv4 00:22:46.183 subtype: nvme subsystem 00:22:46.183 treq: not specified, sq flow control disable supported 00:22:46.183 portid: 1 00:22:46.183 trsvcid: 4420 00:22:46.183 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:46.183 traddr: 10.0.0.1 00:22:46.183 eflags: none 00:22:46.183 sectype: none 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:46.183 14:10:55 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:46.183 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:49.469 Initializing NVMe Controllers 00:22:49.469 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:49.469 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:49.469 Initialization complete. Launching workers. 00:22:49.469 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42144, failed: 0 00:22:49.469 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 42144, failed to submit 0 00:22:49.469 success 0, unsuccess 42144, failed 0 00:22:49.469 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:49.469 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:52.792 Initializing NVMe Controllers 00:22:52.792 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:52.792 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:52.792 Initialization complete. Launching workers. 
00:22:52.792 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84474, failed: 0 00:22:52.792 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39360, failed to submit 45114 00:22:52.792 success 0, unsuccess 39360, failed 0 00:22:52.792 14:11:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:52.792 14:11:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:56.081 Initializing NVMe Controllers 00:22:56.081 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:56.081 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:56.081 Initialization complete. Launching workers. 00:22:56.081 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107403, failed: 0 00:22:56.081 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26834, failed to submit 80569 00:22:56.081 success 0, unsuccess 26834, failed 0 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:56.081 14:11:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:56.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:04.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:04.784 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:04.784 00:23:04.784 real 0m18.541s 00:23:04.784 user 0m7.183s 00:23:04.784 sys 0m9.111s 00:23:04.784 14:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:04.784 14:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.784 ************************************ 00:23:04.784 END TEST kernel_target_abort 00:23:04.784 ************************************ 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:04.784 
14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.784 rmmod nvme_tcp 00:23:04.784 rmmod nvme_fabrics 00:23:04.784 rmmod nvme_keyring 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83470 ']' 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83470 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83470 ']' 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83470 00:23:04.784 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83470) - No such process 00:23:04.784 Process with pid 83470 is not found 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83470 is not found' 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:04.784 14:11:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:04.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:04.784 Waiting for block devices as requested 00:23:04.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.784 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.784 00:23:04.784 real 0m33.650s 00:23:04.784 user 0m55.564s 00:23:04.784 sys 0m12.592s 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:04.784 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:04.784 ************************************ 00:23:04.784 END TEST nvmf_abort_qd_sizes 00:23:04.784 ************************************ 00:23:04.784 14:11:13 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:04.784 14:11:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:04.784 14:11:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:04.784 14:11:13 -- common/autotest_common.sh@10 -- # set +x 
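Before the keyring tests start, note that the kernel_target_abort case above builds its target purely from the Linux nvmet configfs tree rather than from an SPDK process. A condensed sketch of that configfs sequence, reusing this run's nqn, backing device and listen address, with the standard nvmet attribute file names written out (the trace only shows the values being echoed into them), is:

  # Kernel NVMe/TCP soft target, roughly what configure_kernel_target did above.
  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet               # nvmet_tcp also ends up loaded for the tcp port
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo tcp          > "$port/addr_trtype"
  echo ipv4         > "$port/addr_adrfam"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo 4420         > "$port/addr_trsvcid"
  ln -s "$sub" "$port/subsystems/"
  # The run additionally writes an "SPDK-<nqn>" identity string into one of the
  # subsystem's attr_* files; that step is left out of this sketch.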
00:23:04.784 ************************************ 00:23:04.784 START TEST keyring_file 00:23:04.784 ************************************ 00:23:04.784 14:11:13 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:04.784 * Looking for test storage... 00:23:04.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:04.784 14:11:13 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:04.784 14:11:13 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.784 14:11:13 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.784 14:11:13 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.785 14:11:13 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.785 14:11:13 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.785 14:11:13 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.785 14:11:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.785 14:11:13 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.785 14:11:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:04.785 14:11:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t5Iqmg15Ee 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:04.785 14:11:13 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t5Iqmg15Ee 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t5Iqmg15Ee 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.t5Iqmg15Ee 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BcEppAWbqI 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:04.785 14:11:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BcEppAWbqI 00:23:04.785 14:11:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BcEppAWbqI 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BcEppAWbqI 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=84434 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:04.785 14:11:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84434 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84434 ']' 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.785 14:11:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:04.785 [2024-07-25 14:11:14.001344] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
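The two key files this suite uses were prepared just above by prep_key (keyring/common.sh): each hex PSK is wrapped into the NVMeTLSkey-1 interchange form by the inline Python helper in nvmf/common.sh and stored in a private temp file. A condensed sketch of the observable steps for key0 (the redirect into the temp file is inferred from the chmod and echo that follow; the key material and path are the ones recorded in this run):

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # /tmp/tmp.t5Iqmg15Ee here
    format_interchange_psk "$key" 0 > "$path"   # emits the NVMeTLSkey-1 wrapped PSK
    chmod 0600 "$path"                          # the keyring later rejects more permissive modes

key1 (112233445566778899aabbccddeeff00, /tmp/tmp.BcEppAWbqI) is produced the same way.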
00:23:04.785 [2024-07-25 14:11:14.001410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84434 ] 00:23:05.046 [2024-07-25 14:11:14.139271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.046 [2024-07-25 14:11:14.236185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.046 [2024-07-25 14:11:14.277730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:05.616 14:11:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.616 [2024-07-25 14:11:14.847586] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.616 null0 00:23:05.616 [2024-07-25 14:11:14.883525] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.616 [2024-07-25 14:11:14.883715] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:05.616 [2024-07-25 14:11:14.891443] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.616 14:11:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.616 [2024-07-25 14:11:14.907429] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:05.616 request: 00:23:05.616 { 00:23:05.616 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:05.616 "secure_channel": false, 00:23:05.616 "listen_address": { 00:23:05.616 "trtype": "tcp", 00:23:05.616 "traddr": "127.0.0.1", 00:23:05.616 "trsvcid": "4420" 00:23:05.616 }, 00:23:05.616 "method": "nvmf_subsystem_add_listener", 00:23:05.616 "req_id": 1 00:23:05.616 } 00:23:05.616 Got JSON-RPC error response 00:23:05.616 response: 00:23:05.616 { 00:23:05.616 "code": -32602, 00:23:05.616 "message": "Invalid parameters" 00:23:05.616 } 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
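The Invalid parameters error just above is the expected result of the negative check in file.sh: the 127.0.0.1:4420 listener was already added to nqn.2016-06.io.spdk:cnode0, so registering it a second time is rejected (nvmf_rpc logs 'Listener already exists'). A sketch of the same call issued directly with rpc.py, using the socket path from waitforlisten and the argument order shown above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
    # Expected outcome here: JSON-RPC error -32602 "Invalid parameters", matching the response above.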
00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.616 14:11:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=84450 00:23:05.616 14:11:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84450 /var/tmp/bperf.sock 00:23:05.616 14:11:14 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84450 ']' 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.616 14:11:14 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.876 14:11:14 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.876 14:11:14 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.876 14:11:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.876 [2024-07-25 14:11:14.969245] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 00:23:05.876 [2024-07-25 14:11:14.969321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84450 ] 00:23:05.876 [2024-07-25 14:11:15.105728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.136 [2024-07-25 14:11:15.210834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.136 [2024-07-25 14:11:15.252237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:06.706 14:11:15 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.706 14:11:15 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:06.706 14:11:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:06.706 14:11:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:06.965 14:11:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BcEppAWbqI 00:23:06.965 14:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BcEppAWbqI 00:23:06.965 14:11:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:06.965 14:11:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:06.965 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:06.965 14:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:06.965 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.224 14:11:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.t5Iqmg15Ee == 
\/\t\m\p\/\t\m\p\.\t\5\I\q\m\g\1\5\E\e ]] 00:23:07.224 14:11:16 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:07.224 14:11:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:07.224 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.224 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:07.224 14:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.483 14:11:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BcEppAWbqI == \/\t\m\p\/\t\m\p\.\B\c\E\p\p\A\W\b\q\I ]] 00:23:07.483 14:11:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:07.483 14:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:07.483 14:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.483 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.483 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.483 14:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.743 14:11:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:07.743 14:11:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:07.743 14:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.743 14:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:07.743 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:07.743 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.743 14:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.001 14:11:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:08.002 14:11:17 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:08.002 14:11:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:08.002 [2024-07-25 14:11:17.207908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.002 nvme0n1 00:23:08.002 14:11:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:08.261 14:11:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:08.261 14:11:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:08.261 14:11:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.520 14:11:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:08.520 14:11:17 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.520 Running I/O for 1 seconds... 00:23:09.930 00:23:09.930 Latency(us) 00:23:09.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.930 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:09.930 nvme0n1 : 1.00 19029.60 74.33 0.00 0.00 6712.10 3334.04 15682.85 00:23:09.930 =================================================================================================================== 00:23:09.930 Total : 19029.60 74.33 0.00 0.00 6712.10 3334.04 15682.85 00:23:09.930 0 00:23:09.930 14:11:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:09.930 14:11:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:09.930 14:11:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:09.930 14:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.930 14:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:09.930 14:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.930 14:11:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.930 14:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:10.190 14:11:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:10.190 14:11:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.190 14:11:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:10.190 14:11:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
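For reference, the 1-second randrw pass summarized in the latency table above is driven entirely over the bdevperf RPC socket: the controller is attached with the registered key0, bdevperf.py triggers the run, and the controller is detached afterwards. The commands below restate the invocations recorded in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0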
00:23:10.190 14:11:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.190 14:11:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.449 [2024-07-25 14:11:19.634183] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.449 [2024-07-25 14:11:19.634901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c4f0 (107): Transport endpoint is not connected 00:23:10.449 [2024-07-25 14:11:19.635888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c4f0 (9): Bad file descriptor 00:23:10.449 [2024-07-25 14:11:19.636884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.449 [2024-07-25 14:11:19.636903] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:10.449 [2024-07-25 14:11:19.636909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.449 request: 00:23:10.449 { 00:23:10.449 "name": "nvme0", 00:23:10.449 "trtype": "tcp", 00:23:10.449 "traddr": "127.0.0.1", 00:23:10.449 "adrfam": "ipv4", 00:23:10.450 "trsvcid": "4420", 00:23:10.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.450 "prchk_reftag": false, 00:23:10.450 "prchk_guard": false, 00:23:10.450 "hdgst": false, 00:23:10.450 "ddgst": false, 00:23:10.450 "psk": "key1", 00:23:10.450 "method": "bdev_nvme_attach_controller", 00:23:10.450 "req_id": 1 00:23:10.450 } 00:23:10.450 Got JSON-RPC error response 00:23:10.450 response: 00:23:10.450 { 00:23:10.450 "code": -5, 00:23:10.450 "message": "Input/output error" 00:23:10.450 } 00:23:10.450 14:11:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:10.450 14:11:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.450 14:11:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.450 14:11:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.450 14:11:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:10.450 14:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:10.450 14:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.450 14:11:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.450 14:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.450 14:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:10.709 14:11:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:10.709 14:11:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:10.709 14:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:10.709 14:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.709 14:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:10.709 14:11:19 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.709 14:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.968 14:11:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:10.968 14:11:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:10.968 14:11:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:10.968 14:11:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:10.968 14:11:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:11.227 14:11:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:11.227 14:11:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.227 14:11:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:11.486 14:11:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:11.486 14:11:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.t5Iqmg15Ee 00:23:11.486 14:11:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.486 14:11:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:11.486 14:11:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:11.746 [2024-07-25 14:11:20.865074] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.t5Iqmg15Ee': 0100660 00:23:11.746 [2024-07-25 14:11:20.865113] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.746 request: 00:23:11.746 { 00:23:11.746 "name": "key0", 00:23:11.746 "path": "/tmp/tmp.t5Iqmg15Ee", 00:23:11.746 "method": "keyring_file_add_key", 00:23:11.746 "req_id": 1 00:23:11.746 } 00:23:11.746 Got JSON-RPC error response 00:23:11.746 response: 00:23:11.746 { 00:23:11.746 "code": -1, 00:23:11.746 "message": "Operation not permitted" 00:23:11.746 } 00:23:11.746 14:11:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:11.746 14:11:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.746 14:11:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.746 14:11:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.746 14:11:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.t5Iqmg15Ee 00:23:11.746 14:11:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:11.746 14:11:20 keyring_file -- keyring/common.sh@8 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t5Iqmg15Ee 00:23:12.005 14:11:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.t5Iqmg15Ee 00:23:12.005 14:11:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.005 14:11:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:12.005 14:11:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.005 14:11:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.005 14:11:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.265 [2024-07-25 14:11:21.440094] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.t5Iqmg15Ee': No such file or directory 00:23:12.265 [2024-07-25 14:11:21.440134] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:12.265 [2024-07-25 14:11:21.440152] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:12.265 [2024-07-25 14:11:21.440157] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:12.265 [2024-07-25 14:11:21.440162] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:12.265 request: 00:23:12.265 { 00:23:12.265 "name": "nvme0", 00:23:12.265 "trtype": "tcp", 00:23:12.265 "traddr": "127.0.0.1", 00:23:12.265 "adrfam": "ipv4", 00:23:12.265 "trsvcid": "4420", 00:23:12.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:12.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:12.265 "prchk_reftag": false, 00:23:12.265 "prchk_guard": false, 00:23:12.265 "hdgst": false, 00:23:12.265 "ddgst": false, 00:23:12.265 "psk": "key0", 00:23:12.265 "method": "bdev_nvme_attach_controller", 00:23:12.265 "req_id": 1 00:23:12.265 } 
00:23:12.265 Got JSON-RPC error response 00:23:12.265 response: 00:23:12.265 { 00:23:12.265 "code": -19, 00:23:12.265 "message": "No such device" 00:23:12.265 } 00:23:12.265 14:11:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:12.265 14:11:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.265 14:11:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.265 14:11:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.265 14:11:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:12.265 14:11:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:12.524 14:11:21 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xEmZNBWINz 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:12.524 14:11:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xEmZNBWINz 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xEmZNBWINz 00:23:12.524 14:11:21 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.xEmZNBWINz 00:23:12.524 14:11:21 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xEmZNBWINz 00:23:12.524 14:11:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xEmZNBWINz 00:23:12.783 14:11:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.783 14:11:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.041 nvme0n1 00:23:13.041 14:11:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:13.041 14:11:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.042 14:11:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:13.042 14:11:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.042 14:11:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:23:13.042 14:11:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.300 14:11:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:13.300 14:11:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:13.300 14:11:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:13.300 14:11:22 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:13.300 14:11:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.300 14:11:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.300 14:11:22 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:13.300 14:11:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.559 14:11:22 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:13.559 14:11:22 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:13.559 14:11:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:13.559 14:11:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.559 14:11:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.559 14:11:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.559 14:11:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.818 14:11:22 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:13.818 14:11:22 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:13.818 14:11:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:14.121 14:11:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:14.121 14:11:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:14.121 14:11:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:14.121 14:11:23 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:14.121 14:11:23 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xEmZNBWINz 00:23:14.121 14:11:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xEmZNBWINz 00:23:14.379 14:11:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BcEppAWbqI 00:23:14.379 14:11:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BcEppAWbqI 00:23:14.637 14:11:23 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.637 14:11:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.895 nvme0n1 00:23:14.895 14:11:23 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:14.895 14:11:23 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:15.155 14:11:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:15.155 "subsystems": [ 00:23:15.155 { 00:23:15.155 "subsystem": "keyring", 00:23:15.155 "config": [ 00:23:15.155 { 00:23:15.155 "method": "keyring_file_add_key", 00:23:15.155 "params": { 00:23:15.155 "name": "key0", 00:23:15.155 "path": "/tmp/tmp.xEmZNBWINz" 00:23:15.155 } 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "method": "keyring_file_add_key", 00:23:15.155 "params": { 00:23:15.155 "name": "key1", 00:23:15.155 "path": "/tmp/tmp.BcEppAWbqI" 00:23:15.155 } 00:23:15.155 } 00:23:15.155 ] 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "subsystem": "iobuf", 00:23:15.155 "config": [ 00:23:15.155 { 00:23:15.155 "method": "iobuf_set_options", 00:23:15.155 "params": { 00:23:15.155 "small_pool_count": 8192, 00:23:15.155 "large_pool_count": 1024, 00:23:15.155 "small_bufsize": 8192, 00:23:15.155 "large_bufsize": 135168 00:23:15.155 } 00:23:15.155 } 00:23:15.155 ] 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "subsystem": "sock", 00:23:15.155 "config": [ 00:23:15.155 { 00:23:15.155 "method": "sock_set_default_impl", 00:23:15.155 "params": { 00:23:15.155 "impl_name": "uring" 00:23:15.155 } 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "method": "sock_impl_set_options", 00:23:15.155 "params": { 00:23:15.155 "impl_name": "ssl", 00:23:15.155 "recv_buf_size": 4096, 00:23:15.155 "send_buf_size": 4096, 00:23:15.155 "enable_recv_pipe": true, 00:23:15.155 "enable_quickack": false, 00:23:15.155 "enable_placement_id": 0, 00:23:15.155 "enable_zerocopy_send_server": true, 00:23:15.155 "enable_zerocopy_send_client": false, 00:23:15.155 "zerocopy_threshold": 0, 00:23:15.155 "tls_version": 0, 00:23:15.155 "enable_ktls": false 00:23:15.155 } 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "method": "sock_impl_set_options", 00:23:15.155 "params": { 00:23:15.155 "impl_name": "posix", 00:23:15.155 "recv_buf_size": 2097152, 00:23:15.155 "send_buf_size": 2097152, 00:23:15.155 "enable_recv_pipe": true, 00:23:15.155 "enable_quickack": false, 00:23:15.155 "enable_placement_id": 0, 00:23:15.155 "enable_zerocopy_send_server": true, 00:23:15.155 "enable_zerocopy_send_client": false, 00:23:15.155 "zerocopy_threshold": 0, 00:23:15.155 "tls_version": 0, 00:23:15.155 "enable_ktls": false 00:23:15.155 } 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "method": "sock_impl_set_options", 00:23:15.155 "params": { 00:23:15.155 "impl_name": "uring", 00:23:15.155 "recv_buf_size": 2097152, 00:23:15.155 "send_buf_size": 2097152, 00:23:15.155 "enable_recv_pipe": true, 00:23:15.155 "enable_quickack": false, 00:23:15.155 "enable_placement_id": 0, 00:23:15.155 "enable_zerocopy_send_server": false, 00:23:15.155 "enable_zerocopy_send_client": false, 00:23:15.155 "zerocopy_threshold": 0, 00:23:15.155 "tls_version": 0, 00:23:15.155 "enable_ktls": false 00:23:15.155 } 00:23:15.155 } 00:23:15.155 ] 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "subsystem": "vmd", 00:23:15.155 "config": [] 00:23:15.155 }, 00:23:15.155 { 00:23:15.155 "subsystem": "accel", 00:23:15.155 "config": [ 00:23:15.155 { 00:23:15.155 "method": "accel_set_options", 00:23:15.155 "params": { 00:23:15.155 "small_cache_size": 128, 00:23:15.156 "large_cache_size": 16, 00:23:15.156 "task_count": 2048, 00:23:15.156 "sequence_count": 2048, 00:23:15.156 "buf_count": 2048 00:23:15.156 } 00:23:15.156 } 00:23:15.156 ] 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "subsystem": "bdev", 00:23:15.156 "config": [ 00:23:15.156 { 
00:23:15.156 "method": "bdev_set_options", 00:23:15.156 "params": { 00:23:15.156 "bdev_io_pool_size": 65535, 00:23:15.156 "bdev_io_cache_size": 256, 00:23:15.156 "bdev_auto_examine": true, 00:23:15.156 "iobuf_small_cache_size": 128, 00:23:15.156 "iobuf_large_cache_size": 16 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_raid_set_options", 00:23:15.156 "params": { 00:23:15.156 "process_window_size_kb": 1024, 00:23:15.156 "process_max_bandwidth_mb_sec": 0 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_iscsi_set_options", 00:23:15.156 "params": { 00:23:15.156 "timeout_sec": 30 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_nvme_set_options", 00:23:15.156 "params": { 00:23:15.156 "action_on_timeout": "none", 00:23:15.156 "timeout_us": 0, 00:23:15.156 "timeout_admin_us": 0, 00:23:15.156 "keep_alive_timeout_ms": 10000, 00:23:15.156 "arbitration_burst": 0, 00:23:15.156 "low_priority_weight": 0, 00:23:15.156 "medium_priority_weight": 0, 00:23:15.156 "high_priority_weight": 0, 00:23:15.156 "nvme_adminq_poll_period_us": 10000, 00:23:15.156 "nvme_ioq_poll_period_us": 0, 00:23:15.156 "io_queue_requests": 512, 00:23:15.156 "delay_cmd_submit": true, 00:23:15.156 "transport_retry_count": 4, 00:23:15.156 "bdev_retry_count": 3, 00:23:15.156 "transport_ack_timeout": 0, 00:23:15.156 "ctrlr_loss_timeout_sec": 0, 00:23:15.156 "reconnect_delay_sec": 0, 00:23:15.156 "fast_io_fail_timeout_sec": 0, 00:23:15.156 "disable_auto_failback": false, 00:23:15.156 "generate_uuids": false, 00:23:15.156 "transport_tos": 0, 00:23:15.156 "nvme_error_stat": false, 00:23:15.156 "rdma_srq_size": 0, 00:23:15.156 "io_path_stat": false, 00:23:15.156 "allow_accel_sequence": false, 00:23:15.156 "rdma_max_cq_size": 0, 00:23:15.156 "rdma_cm_event_timeout_ms": 0, 00:23:15.156 "dhchap_digests": [ 00:23:15.156 "sha256", 00:23:15.156 "sha384", 00:23:15.156 "sha512" 00:23:15.156 ], 00:23:15.156 "dhchap_dhgroups": [ 00:23:15.156 "null", 00:23:15.156 "ffdhe2048", 00:23:15.156 "ffdhe3072", 00:23:15.156 "ffdhe4096", 00:23:15.156 "ffdhe6144", 00:23:15.156 "ffdhe8192" 00:23:15.156 ] 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_nvme_attach_controller", 00:23:15.156 "params": { 00:23:15.156 "name": "nvme0", 00:23:15.156 "trtype": "TCP", 00:23:15.156 "adrfam": "IPv4", 00:23:15.156 "traddr": "127.0.0.1", 00:23:15.156 "trsvcid": "4420", 00:23:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.156 "prchk_reftag": false, 00:23:15.156 "prchk_guard": false, 00:23:15.156 "ctrlr_loss_timeout_sec": 0, 00:23:15.156 "reconnect_delay_sec": 0, 00:23:15.156 "fast_io_fail_timeout_sec": 0, 00:23:15.156 "psk": "key0", 00:23:15.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.156 "hdgst": false, 00:23:15.156 "ddgst": false 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_nvme_set_hotplug", 00:23:15.156 "params": { 00:23:15.156 "period_us": 100000, 00:23:15.156 "enable": false 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "bdev_wait_for_examine" 00:23:15.156 } 00:23:15.156 ] 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "subsystem": "nbd", 00:23:15.156 "config": [] 00:23:15.156 } 00:23:15.156 ] 00:23:15.156 }' 00:23:15.156 14:11:24 keyring_file -- keyring/file.sh@114 -- # killprocess 84450 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84450 ']' 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84450 00:23:15.156 14:11:24 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84450 00:23:15.156 killing process with pid 84450 00:23:15.156 Received shutdown signal, test time was about 1.000000 seconds 00:23:15.156 00:23:15.156 Latency(us) 00:23:15.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.156 =================================================================================================================== 00:23:15.156 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84450' 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@969 -- # kill 84450 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@974 -- # wait 84450 00:23:15.156 14:11:24 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:15.156 14:11:24 keyring_file -- keyring/file.sh@117 -- # bperfpid=84678 00:23:15.156 14:11:24 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84678 /var/tmp/bperf.sock 00:23:15.156 14:11:24 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84678 ']' 00:23:15.156 14:11:24 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:15.156 "subsystems": [ 00:23:15.156 { 00:23:15.156 "subsystem": "keyring", 00:23:15.156 "config": [ 00:23:15.156 { 00:23:15.156 "method": "keyring_file_add_key", 00:23:15.156 "params": { 00:23:15.156 "name": "key0", 00:23:15.156 "path": "/tmp/tmp.xEmZNBWINz" 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "keyring_file_add_key", 00:23:15.156 "params": { 00:23:15.156 "name": "key1", 00:23:15.156 "path": "/tmp/tmp.BcEppAWbqI" 00:23:15.156 } 00:23:15.156 } 00:23:15.156 ] 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "subsystem": "iobuf", 00:23:15.156 "config": [ 00:23:15.156 { 00:23:15.156 "method": "iobuf_set_options", 00:23:15.156 "params": { 00:23:15.156 "small_pool_count": 8192, 00:23:15.156 "large_pool_count": 1024, 00:23:15.156 "small_bufsize": 8192, 00:23:15.156 "large_bufsize": 135168 00:23:15.156 } 00:23:15.156 } 00:23:15.156 ] 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "subsystem": "sock", 00:23:15.156 "config": [ 00:23:15.156 { 00:23:15.156 "method": "sock_set_default_impl", 00:23:15.156 "params": { 00:23:15.156 "impl_name": "uring" 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "sock_impl_set_options", 00:23:15.156 "params": { 00:23:15.156 "impl_name": "ssl", 00:23:15.156 "recv_buf_size": 4096, 00:23:15.156 "send_buf_size": 4096, 00:23:15.156 "enable_recv_pipe": true, 00:23:15.156 "enable_quickack": false, 00:23:15.156 "enable_placement_id": 0, 00:23:15.156 "enable_zerocopy_send_server": true, 00:23:15.156 "enable_zerocopy_send_client": false, 00:23:15.156 "zerocopy_threshold": 0, 00:23:15.156 "tls_version": 0, 00:23:15.156 "enable_ktls": false 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.156 "method": "sock_impl_set_options", 00:23:15.156 "params": { 00:23:15.156 "impl_name": "posix", 00:23:15.156 "recv_buf_size": 2097152, 00:23:15.156 "send_buf_size": 2097152, 00:23:15.156 
"enable_recv_pipe": true, 00:23:15.156 "enable_quickack": false, 00:23:15.156 "enable_placement_id": 0, 00:23:15.156 "enable_zerocopy_send_server": true, 00:23:15.156 "enable_zerocopy_send_client": false, 00:23:15.156 "zerocopy_threshold": 0, 00:23:15.156 "tls_version": 0, 00:23:15.156 "enable_ktls": false 00:23:15.156 } 00:23:15.156 }, 00:23:15.156 { 00:23:15.157 "method": "sock_impl_set_options", 00:23:15.157 "params": { 00:23:15.157 "impl_name": "uring", 00:23:15.157 "recv_buf_size": 2097152, 00:23:15.157 "send_buf_size": 2097152, 00:23:15.157 "enable_recv_pipe": true, 00:23:15.157 "enable_quickack": false, 00:23:15.157 "enable_placement_id": 0, 00:23:15.157 "enable_zerocopy_send_server": false, 00:23:15.157 "enable_zerocopy_send_client": false, 00:23:15.157 "zerocopy_threshold": 0, 00:23:15.157 "tls_version": 0, 00:23:15.157 "enable_ktls": false 00:23:15.157 } 00:23:15.157 } 00:23:15.157 ] 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "subsystem": "vmd", 00:23:15.157 "config": [] 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "subsystem": "accel", 00:23:15.157 "config": [ 00:23:15.157 { 00:23:15.157 "method": "accel_set_options", 00:23:15.157 "params": { 00:23:15.157 "small_cache_size": 128, 00:23:15.157 "large_cache_size": 16, 00:23:15.157 "task_count": 2048, 00:23:15.157 "sequence_count": 2048, 00:23:15.157 "buf_count": 2048 00:23:15.157 } 00:23:15.157 } 00:23:15.157 ] 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "subsystem": "bdev", 00:23:15.157 "config": [ 00:23:15.157 { 00:23:15.157 "method": "bdev_set_options", 00:23:15.157 "params": { 00:23:15.157 "bdev_io_pool_size": 65535, 00:23:15.157 "bdev_io_cache_size": 256, 00:23:15.157 "bdev_auto_examine": true, 00:23:15.157 "iobuf_small_cache_size": 128, 00:23:15.157 "iobuf_large_cache_size": 16 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_raid_set_options", 00:23:15.157 "params": { 00:23:15.157 "process_window_size_kb": 1024, 00:23:15.157 "process_max_bandwidth_mb_sec": 0 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_iscsi_set_options", 00:23:15.157 "params": { 00:23:15.157 "timeout_sec": 30 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_nvme_set_options", 00:23:15.157 "params": { 00:23:15.157 "action_on_timeout": "none", 00:23:15.157 "timeout_us": 0, 00:23:15.157 "timeout_admin_us": 0, 00:23:15.157 "keep_alive_timeout_ms": 10000, 00:23:15.157 "arbitration_burst": 0, 00:23:15.157 "low_priority_weight": 0, 00:23:15.157 "medium_priority_weight": 0, 00:23:15.157 "high_priority_weight": 0, 00:23:15.157 "nvme_adminq_poll_period_us": 10000, 00:23:15.157 "nvme_ioq_poll_period_us": 0, 00:23:15.157 "io_queue_requests": 512, 00:23:15.157 "delay_cmd_submit": true, 00:23:15.157 "transport_retry_count": 4, 00:23:15.157 "bdev_retry_count": 3, 00:23:15.157 "transport_ack_timeout": 0, 00:23:15.157 "ctrlr_loss_timeout_sec": 0, 00:23:15.157 "reconnect_delay_sec": 0, 00:23:15.157 "fast_io_fail_timeout_sec": 0, 00:23:15.157 "disable_auto_failback": false, 00:23:15.157 "generate_uuids": false, 00:23:15.157 "transport_tos": 0, 00:23:15.157 "nvme_error_stat": false, 00:23:15.157 "rdma_srq_size": 0, 00:23:15.157 "io_path_stat": false, 00:23:15.157 "allow_accel_sequence": false, 00:23:15.157 "rdma_max_cq_size": 0, 00:23:15.157 "rdma_cm_event_timeout_ms": 0, 00:23:15.157 "dhchap_digests": [ 00:23:15.157 "sha256", 00:23:15.157 "sha384", 00:23:15.157 "sha512" 00:23:15.157 ], 00:23:15.157 "dhchap_dhgroups": [ 00:23:15.157 "null", 00:23:15.157 "ffdhe2048", 00:23:15.157 "ffdhe3072", 
00:23:15.157 "ffdhe4096", 00:23:15.157 "ffdhe6144", 00:23:15.157 "ffdhe8192" 00:23:15.157 ] 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_nvme_attach_controller", 00:23:15.157 "params": { 00:23:15.157 "name": "nvme0", 00:23:15.157 "trtype": "TCP", 00:23:15.157 "adrfam": "IPv4", 00:23:15.157 "traddr": "127.0.0.1", 00:23:15.157 "trsvcid": "4420", 00:23:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.157 "prchk_reftag": false, 00:23:15.157 "prchk_guard": false, 00:23:15.157 "ctrlr_loss_timeout_sec": 0, 00:23:15.157 "reconnect_delay_sec": 0, 00:23:15.157 "fast_io_fail_timeout_sec": 0, 00:23:15.157 "psk": "key0", 00:23:15.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.157 "hdgst": false, 00:23:15.157 "ddgst": false 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_nvme_set_hotplug", 00:23:15.157 "params": { 00:23:15.157 "period_us": 100000, 00:23:15.157 "enable": false 00:23:15.157 } 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "method": "bdev_wait_for_examine" 00:23:15.157 } 00:23:15.157 ] 00:23:15.157 }, 00:23:15.157 { 00:23:15.157 "subsystem": "nbd", 00:23:15.157 "config": [] 00:23:15.157 } 00:23:15.157 ] 00:23:15.157 }' 00:23:15.157 14:11:24 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:15.157 14:11:24 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.157 14:11:24 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:15.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:15.157 14:11:24 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.157 14:11:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:15.416 [2024-07-25 14:11:24.481196] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:23:15.416 [2024-07-25 14:11:24.481259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84678 ] 00:23:15.416 [2024-07-25 14:11:24.615715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.416 [2024-07-25 14:11:24.707854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.675 [2024-07-25 14:11:24.829147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:15.675 [2024-07-25 14:11:24.876658] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.244 14:11:25 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.244 14:11:25 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:16.244 14:11:25 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:16.244 14:11:25 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.244 14:11:25 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:16.244 14:11:25 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.244 14:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:16.503 14:11:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:16.503 14:11:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:16.503 14:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.503 14:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:16.503 14:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.503 14:11:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.503 14:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:16.762 14:11:25 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:16.762 14:11:25 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:16.762 14:11:25 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:16.762 14:11:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:17.022 14:11:26 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:17.022 14:11:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:17.022 14:11:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.xEmZNBWINz /tmp/tmp.BcEppAWbqI 00:23:17.022 14:11:26 keyring_file -- keyring/file.sh@20 -- # killprocess 84678 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84678 ']' 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84678 00:23:17.022 14:11:26 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84678 00:23:17.022 killing process with pid 84678 00:23:17.022 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.022 00:23:17.022 Latency(us) 00:23:17.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.022 =================================================================================================================== 00:23:17.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84678' 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@969 -- # kill 84678 00:23:17.022 14:11:26 keyring_file -- common/autotest_common.sh@974 -- # wait 84678 00:23:17.282 14:11:26 keyring_file -- keyring/file.sh@21 -- # killprocess 84434 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84434 ']' 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84434 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@955 -- # uname 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84434 00:23:17.282 killing process with pid 84434 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84434' 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@969 -- # kill 84434 00:23:17.282 [2024-07-25 14:11:26.369090] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:17.282 14:11:26 keyring_file -- common/autotest_common.sh@974 -- # wait 84434 00:23:17.541 00:23:17.541 real 0m13.016s 00:23:17.541 user 0m31.466s 00:23:17.541 sys 0m2.812s 00:23:17.541 14:11:26 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.541 14:11:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:17.541 ************************************ 00:23:17.541 END TEST keyring_file 00:23:17.541 ************************************ 00:23:17.541 14:11:26 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:23:17.541 14:11:26 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:17.541 14:11:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:17.541 14:11:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.541 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.541 ************************************ 00:23:17.541 START TEST keyring_linux 00:23:17.541 ************************************ 00:23:17.541 14:11:26 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:17.801 * Looking for test storage... 
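
The killprocess teardown traced above combines kill -0 (existence check), ps --no-headers -o comm= (name check) and wait (reap). The following is a simplified standalone sketch of that pattern, not the autotest_common.sh implementation, and it omits the sudo special-casing visible in the trace:

    # Sketch of the PID teardown pattern seen above (simplified; assumes the PID
    # belongs to a child of the current shell, as it does in the test harness).
    killprocess_sketch() {
        local pid=$1
        # kill -0 delivers no signal; it only checks that the PID exists and is signalable.
        kill -0 "$pid" 2>/dev/null || return 0
        # Confirm what is actually running under that PID before signalling it.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        # Reap the child so its exit status is collected and no zombie is left behind.
        wait "$pid" 2>/dev/null || true
    }
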
00:23:17.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ae1cc223-8955-4554-9c53-a88c4ce7ab12 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:17.801 14:11:26 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.801 14:11:26 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.801 14:11:26 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.801 14:11:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.801 14:11:26 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.801 14:11:26 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.801 14:11:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:17.801 14:11:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:17.801 14:11:26 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:17.801 /tmp/:spdk-test:key0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:17.801 14:11:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:17.801 14:11:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:17.801 14:11:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:17.801 14:11:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:17.801 /tmp/:spdk-test:key1 00:23:17.801 14:11:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:17.801 14:11:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84791 00:23:17.801 14:11:27 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.801 14:11:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84791 00:23:17.801 14:11:27 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84791 ']' 00:23:17.801 14:11:27 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.801 14:11:27 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.802 14:11:27 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.802 14:11:27 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.802 14:11:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 [2024-07-25 14:11:27.078647] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:23:17.802 [2024-07-25 14:11:27.078712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84791 ] 00:23:18.060 [2024-07-25 14:11:27.209475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.061 [2024-07-25 14:11:27.303172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.061 [2024-07-25 14:11:27.343782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:18.628 14:11:27 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.628 14:11:27 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:23:18.628 14:11:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:18.628 14:11:27 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.628 14:11:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:18.628 [2024-07-25 14:11:27.905808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.628 null0 00:23:18.888 [2024-07-25 14:11:27.937717] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.888 [2024-07-25 14:11:27.937916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.888 14:11:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:18.888 432086752 00:23:18.888 14:11:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:18.888 595556409 00:23:18.888 14:11:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84809 00:23:18.888 14:11:27 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:18.888 14:11:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84809 /var/tmp/bperf.sock 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84809 ']' 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.888 14:11:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 [2024-07-25 14:11:28.020204] Starting SPDK v24.09-pre git sha1 208b98e37 / DPDK 24.03.0 initialization... 
00:23:18.888 [2024-07-25 14:11:28.020267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84809 ] 00:23:18.888 [2024-07-25 14:11:28.155308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.147 [2024-07-25 14:11:28.237681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.715 14:11:28 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.715 14:11:28 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:23:19.715 14:11:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:19.715 14:11:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:19.973 14:11:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:19.973 14:11:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:19.973 [2024-07-25 14:11:29.251673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:20.232 14:11:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:20.232 14:11:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:20.232 [2024-07-25 14:11:29.477090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.490 nvme0n1 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:20.490 14:11:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:20.490 14:11:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:20.490 14:11:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:20.490 14:11:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.490 14:11:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@25 -- # sn=432086752 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:20.748 
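
Condensed, the keyring_linux flow traced above reduces to four commands, all of which appear verbatim in the log. The key material and NQNs below are the test's own fixtures rather than values to reuse, and rpc.py is invoked relative to the SPDK repository root:

    # 1. Put the interchange-format TLS PSK on the kernel session keyring (@s).
    keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # 2. With bdevperf started as "-z --wait-for-rpc" on /var/tmp/bperf.sock,
    #    enable the Linux keyring backend and complete framework init.
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # 3. Attach the NVMe/TCP controller, referring to the PSK by keyring name.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # 4. Cross-check the key serial reported over RPC against the kernel keyring.
    keyctl search @s user :spdk-test:key0
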
14:11:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 432086752 == \4\3\2\0\8\6\7\5\2 ]] 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 432086752 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:20.748 14:11:29 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.005 Running I/O for 1 seconds... 00:23:21.940 00:23:21.940 Latency(us) 00:23:21.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.940 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:21.940 nvme0n1 : 1.01 20727.98 80.97 0.00 0.00 6152.33 5265.77 11447.34 00:23:21.940 =================================================================================================================== 00:23:21.940 Total : 20727.98 80.97 0.00 0.00 6152.33 5265.77 11447.34 00:23:21.940 0 00:23:21.940 14:11:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:21.940 14:11:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:22.198 14:11:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:22.198 14:11:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.198 14:11:31 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.198 14:11:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.457 [2024-07-25 14:11:31.642861] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:22.457 [2024-07-25 14:11:31.643582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1895460 (107): Transport endpoint is not connected 00:23:22.457 [2024-07-25 14:11:31.644571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1895460 (9): Bad file descriptor 00:23:22.457 [2024-07-25 14:11:31.645566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:22.457 [2024-07-25 14:11:31.645594] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:22.457 [2024-07-25 14:11:31.645600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:22.457 request: 00:23:22.457 { 00:23:22.457 "name": "nvme0", 00:23:22.457 "trtype": "tcp", 00:23:22.457 "traddr": "127.0.0.1", 00:23:22.457 "adrfam": "ipv4", 00:23:22.457 "trsvcid": "4420", 00:23:22.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:22.457 "prchk_reftag": false, 00:23:22.457 "prchk_guard": false, 00:23:22.457 "hdgst": false, 00:23:22.457 "ddgst": false, 00:23:22.457 "psk": ":spdk-test:key1", 00:23:22.457 "method": "bdev_nvme_attach_controller", 00:23:22.457 "req_id": 1 00:23:22.457 } 00:23:22.457 Got JSON-RPC error response 00:23:22.457 response: 00:23:22.457 { 00:23:22.457 "code": -5, 00:23:22.457 "message": "Input/output error" 00:23:22.457 } 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@33 -- # sn=432086752 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 432086752 00:23:22.457 1 links removed 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@33 -- # sn=595556409 00:23:22.457 14:11:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 595556409 00:23:22.457 1 links removed 00:23:22.457 14:11:31 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 84809 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84809 ']' 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84809 00:23:22.457 14:11:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84809 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.458 killing process with pid 84809 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84809' 00:23:22.458 Received shutdown signal, test time was about 1.000000 seconds 00:23:22.458 00:23:22.458 Latency(us) 00:23:22.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.458 =================================================================================================================== 00:23:22.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 84809 00:23:22.458 14:11:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 84809 00:23:22.715 14:11:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84791 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84791 ']' 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84791 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84791 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.715 14:11:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.715 killing process with pid 84791 00:23:22.716 14:11:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84791' 00:23:22.716 14:11:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 84791 00:23:22.716 14:11:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 84791 00:23:22.974 00:23:22.974 real 0m5.503s 00:23:22.974 user 0m10.120s 00:23:22.974 sys 0m1.458s 00:23:22.974 14:11:32 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.974 14:11:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:22.974 ************************************ 00:23:22.974 END TEST keyring_linux 00:23:22.974 ************************************ 00:23:23.232 14:11:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 
00:23:23.232 14:11:32 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:23:23.232 14:11:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:23.232 14:11:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:23.232 14:11:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:23.232 14:11:32 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:23:23.232 14:11:32 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:23:23.232 14:11:32 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:23:23.232 14:11:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.232 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:23:23.232 14:11:32 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:23:23.232 14:11:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:23.232 14:11:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:23.232 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:23:25.758 INFO: APP EXITING 00:23:25.758 INFO: killing all VMs 00:23:25.758 INFO: killing vhost app 00:23:25.758 INFO: EXIT DONE 00:23:26.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.016 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:26.016 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:26.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.955 Cleaning 00:23:26.955 Removing: /var/run/dpdk/spdk0/config 00:23:26.955 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:26.955 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:26.955 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:26.955 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:26.955 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:26.955 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:26.955 Removing: /var/run/dpdk/spdk1/config 00:23:26.955 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:26.955 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:26.955 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:26.955 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:26.955 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:26.955 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:26.955 Removing: /var/run/dpdk/spdk2/config 00:23:26.955 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:26.955 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:26.955 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:26.955 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:26.955 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:26.955 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:26.955 Removing: /var/run/dpdk/spdk3/config 00:23:26.955 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:26.955 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:26.955 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:26.955 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:26.955 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:26.955 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:26.955 Removing: /var/run/dpdk/spdk4/config 00:23:26.955 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:26.955 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:27.217 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:27.217 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:27.217 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:23:27.217 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:27.217 Removing: /dev/shm/nvmf_trace.0 00:23:27.217 Removing: /dev/shm/spdk_tgt_trace.pid59068 00:23:27.217 Removing: /var/run/dpdk/spdk0 00:23:27.217 Removing: /var/run/dpdk/spdk1 00:23:27.217 Removing: /var/run/dpdk/spdk2 00:23:27.217 Removing: /var/run/dpdk/spdk3 00:23:27.217 Removing: /var/run/dpdk/spdk4 00:23:27.217 Removing: /var/run/dpdk/spdk_pid58923 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59068 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59266 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59347 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59375 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59479 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59497 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59620 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59810 00:23:27.218 Removing: /var/run/dpdk/spdk_pid59951 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60021 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60087 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60178 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60244 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60288 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60318 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60385 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60496 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60917 00:23:27.218 Removing: /var/run/dpdk/spdk_pid60964 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61015 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61031 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61098 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61113 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61170 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61186 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61237 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61250 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61290 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61308 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61431 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61461 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61536 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61846 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61858 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61893 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61908 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61923 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61942 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61956 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61977 00:23:27.218 Removing: /var/run/dpdk/spdk_pid61996 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62004 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62025 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62044 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62059 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62075 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62094 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62113 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62123 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62142 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62161 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62181 00:23:27.218 Removing: /var/run/dpdk/spdk_pid62207 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62226 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62260 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62314 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62348 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62359 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62387 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62402 00:23:27.476 Removing: 
/var/run/dpdk/spdk_pid62404 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62452 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62462 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62496 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62506 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62515 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62525 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62534 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62544 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62553 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62563 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62591 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62618 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62628 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62661 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62671 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62678 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62719 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62730 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62757 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62770 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62772 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62785 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62787 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62800 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62807 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62815 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62889 00:23:27.476 Removing: /var/run/dpdk/spdk_pid62942 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63041 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63082 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63125 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63145 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63162 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63176 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63213 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63229 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63300 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63316 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63360 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63426 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63477 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63518 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63602 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63650 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63688 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63901 00:23:27.476 Removing: /var/run/dpdk/spdk_pid63993 00:23:27.476 Removing: /var/run/dpdk/spdk_pid64027 00:23:27.476 Removing: /var/run/dpdk/spdk_pid64365 00:23:27.476 Removing: /var/run/dpdk/spdk_pid64403 00:23:27.476 Removing: /var/run/dpdk/spdk_pid64699 00:23:27.476 Removing: /var/run/dpdk/spdk_pid65099 00:23:27.476 Removing: /var/run/dpdk/spdk_pid65358 00:23:27.476 Removing: /var/run/dpdk/spdk_pid66126 00:23:27.476 Removing: /var/run/dpdk/spdk_pid66940 00:23:27.476 Removing: /var/run/dpdk/spdk_pid67062 00:23:27.476 Removing: /var/run/dpdk/spdk_pid67124 00:23:27.476 Removing: /var/run/dpdk/spdk_pid68380 00:23:27.476 Removing: /var/run/dpdk/spdk_pid68636 00:23:27.734 Removing: /var/run/dpdk/spdk_pid71834 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72141 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72249 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72377 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72399 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72427 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72454 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72541 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72670 00:23:27.734 Removing: /var/run/dpdk/spdk_pid72816 
00:23:27.734 Removing: /var/run/dpdk/spdk_pid72891 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73079 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73162 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73249 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73549 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73968 00:23:27.734 Removing: /var/run/dpdk/spdk_pid73975 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74244 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74259 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74279 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74304 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74309 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74618 00:23:27.734 Removing: /var/run/dpdk/spdk_pid74675 00:23:27.735 Removing: /var/run/dpdk/spdk_pid74951 00:23:27.735 Removing: /var/run/dpdk/spdk_pid75148 00:23:27.735 Removing: /var/run/dpdk/spdk_pid75519 00:23:27.735 Removing: /var/run/dpdk/spdk_pid76021 00:23:27.735 Removing: /var/run/dpdk/spdk_pid76803 00:23:27.735 Removing: /var/run/dpdk/spdk_pid77392 00:23:27.735 Removing: /var/run/dpdk/spdk_pid77394 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79303 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79364 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79419 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79479 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79596 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79650 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79709 00:23:27.735 Removing: /var/run/dpdk/spdk_pid79765 00:23:27.735 Removing: /var/run/dpdk/spdk_pid80077 00:23:27.735 Removing: /var/run/dpdk/spdk_pid81235 00:23:27.735 Removing: /var/run/dpdk/spdk_pid81370 00:23:27.735 Removing: /var/run/dpdk/spdk_pid81612 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82165 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82324 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82480 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82577 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82752 00:23:27.735 Removing: /var/run/dpdk/spdk_pid82865 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83521 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83562 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83597 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83867 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83901 00:23:27.735 Removing: /var/run/dpdk/spdk_pid83932 00:23:27.735 Removing: /var/run/dpdk/spdk_pid84434 00:23:27.735 Removing: /var/run/dpdk/spdk_pid84450 00:23:27.735 Removing: /var/run/dpdk/spdk_pid84678 00:23:27.735 Removing: /var/run/dpdk/spdk_pid84791 00:23:27.735 Removing: /var/run/dpdk/spdk_pid84809 00:23:27.735 Clean 00:23:27.992 14:11:37 -- common/autotest_common.sh@1451 -- # return 0 00:23:27.992 14:11:37 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:23:27.992 14:11:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.992 14:11:37 -- common/autotest_common.sh@10 -- # set +x 00:23:27.992 14:11:37 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:23:27.992 14:11:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.992 14:11:37 -- common/autotest_common.sh@10 -- # set +x 00:23:27.992 14:11:37 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:27.992 14:11:37 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:27.992 14:11:37 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:27.992 14:11:37 -- spdk/autotest.sh@395 -- # hash lcov 00:23:27.992 14:11:37 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:27.992 
14:11:37 -- spdk/autotest.sh@397 -- # hostname 00:23:27.992 14:11:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:28.250 geninfo: WARNING: invalid characters removed from testname! 00:23:50.173 14:11:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:53.459 14:12:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:55.367 14:12:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:57.320 14:12:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:59.226 14:12:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:01.131 14:12:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:03.712 14:12:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:03.712 14:12:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.712 14:12:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:03.712 14:12:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.712 14:12:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.712 14:12:12 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.712 14:12:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.712 14:12:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.712 14:12:12 -- paths/export.sh@5 -- $ export PATH 00:24:03.712 14:12:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.712 14:12:12 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:03.712 14:12:12 -- common/autobuild_common.sh@447 -- $ date +%s 00:24:03.712 14:12:12 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721916732.XXXXXX 00:24:03.712 14:12:12 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721916732.gvoTGr 00:24:03.712 14:12:12 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:24:03.712 14:12:12 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:24:03.712 14:12:12 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:03.712 14:12:12 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:03.712 14:12:12 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:03.712 14:12:12 -- common/autobuild_common.sh@463 -- $ get_config_params 00:24:03.712 14:12:12 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:24:03.712 14:12:12 -- common/autotest_common.sh@10 -- $ set +x 00:24:03.712 14:12:12 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:24:03.712 14:12:12 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:24:03.712 14:12:12 -- pm/common@17 -- $ local monitor 00:24:03.712 14:12:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:03.712 14:12:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:03.712 
14:12:12 -- pm/common@21 -- $ date +%s 00:24:03.712 14:12:12 -- pm/common@25 -- $ sleep 1 00:24:03.712 14:12:12 -- pm/common@21 -- $ date +%s 00:24:03.712 14:12:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721916732 00:24:03.712 14:12:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721916732 00:24:03.712 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721916732_collect-cpu-load.pm.log 00:24:03.712 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721916732_collect-vmstat.pm.log 00:24:04.279 14:12:13 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:24:04.279 14:12:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:24:04.279 14:12:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:04.279 14:12:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:04.279 14:12:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:04.279 14:12:13 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:04.279 14:12:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:04.279 14:12:13 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:04.279 14:12:13 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:04.279 14:12:13 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:04.279 14:12:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:04.279 14:12:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:04.279 14:12:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:04.279 14:12:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:04.279 14:12:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:04.536 14:12:13 -- pm/common@44 -- $ pid=86560 00:24:04.536 14:12:13 -- pm/common@50 -- $ kill -TERM 86560 00:24:04.536 14:12:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:04.536 14:12:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:04.536 14:12:13 -- pm/common@44 -- $ pid=86562 00:24:04.536 14:12:13 -- pm/common@50 -- $ kill -TERM 86562 00:24:04.536 + [[ -n 5324 ]] 00:24:04.536 + sudo kill 5324 00:24:04.545 [Pipeline] } 00:24:04.565 [Pipeline] // timeout 00:24:04.572 [Pipeline] } 00:24:04.590 [Pipeline] // stage 00:24:04.622 [Pipeline] } 00:24:04.641 [Pipeline] // catchError 00:24:04.650 [Pipeline] stage 00:24:04.652 [Pipeline] { (Stop VM) 00:24:04.669 [Pipeline] sh 00:24:04.947 + vagrant halt 00:24:07.480 ==> default: Halting domain... 00:24:15.617 [Pipeline] sh 00:24:15.899 + vagrant destroy -f 00:24:18.435 ==> default: Removing domain... 
00:24:18.707 [Pipeline] sh 00:24:18.991 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:19.003 [Pipeline] } 00:24:19.023 [Pipeline] // stage 00:24:19.029 [Pipeline] } 00:24:19.053 [Pipeline] // dir 00:24:19.059 [Pipeline] } 00:24:19.077 [Pipeline] // wrap 00:24:19.084 [Pipeline] } 00:24:19.099 [Pipeline] // catchError 00:24:19.108 [Pipeline] stage 00:24:19.110 [Pipeline] { (Epilogue) 00:24:19.125 [Pipeline] sh 00:24:19.404 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:24.722 [Pipeline] catchError 00:24:24.724 [Pipeline] { 00:24:24.738 [Pipeline] sh 00:24:25.021 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:25.021 Artifacts sizes are good 00:24:25.030 [Pipeline] } 00:24:25.047 [Pipeline] // catchError 00:24:25.059 [Pipeline] archiveArtifacts 00:24:25.066 Archiving artifacts 00:24:25.217 [Pipeline] cleanWs 00:24:25.227 [WS-CLEANUP] Deleting project workspace... 00:24:25.227 [WS-CLEANUP] Deferred wipeout is used... 00:24:25.233 [WS-CLEANUP] done 00:24:25.234 [Pipeline] } 00:24:25.251 [Pipeline] // stage 00:24:25.256 [Pipeline] } 00:24:25.271 [Pipeline] // node 00:24:25.276 [Pipeline] End of Pipeline 00:24:25.307 Finished: SUCCESS
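
For reference, the coverage post-processing that produced cov_total.info above reduces to a capture, a merge and a set of exclusions. A sketch using the same lcov switches shown in the log; SPDK_DIR and the intermediate file names mirror the trace, and cov_base.info is assumed to have been captured before the tests ran:

    # The --rc switches are copied from the log; RC is shorthand for readability.
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
        --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # build tree used in this run

    # Capture per-test counters from the instrumented build tree.
    lcov $RC --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

    # Merge with the baseline captured before the tests ran.
    lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info

    # Strip third-party code and tools from the combined report, as in the trace.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC --no-external -q -r cov_total.info "$pat" -o cov_total.info
    done
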